All right. Hi, everybody, and welcome. I am here with JJ Allaire. My name is Jeremy Howard, and we are having what I originally was very proud of myself for inventing the idea of a two-way AMA. I wondered why other people haven't come up with this idea, and then I realized, oh, I think I just invented another name for a conversation.
So this is either a conversation with JJ Allaire or a two-way AMA. We'll see if it turns out to be any different. So, good day, JJ. Thanks for joining. Great being here. So I always like to find out a little bit about people's environs. Where are you talking to us from today?
I'm talking from my home in Newton, Massachusetts. In Newton, Massachusetts? Newton, yeah. And where is that? It's kind of due west of the city, maybe 15 minutes outside of downtown. So it's an inner ring suburb. And why there, is this where you've always been?
What's your favorite place in the world? Not even close. No, I grew up in Philadelphia till I was 13, and then I moved to Minnesota. I was there for a long time, till I was about 30. And then I moved to Boston because of the company I was working with, and then I just ended up staying here.
And I'm here because the public schools here are great, so all other things considered I'd probably live in the city, but for the time being I want to take advantage of the public schools. Boston's a great town. I spent quite a bit of time there when I set up an office for one of my earlier companies, and I got very into the Boston Red Sox, as you do, and got very into the Tennis and Racquet Club.
I used to be a member of the Tennis and Racquet Club, and I have a very good friend there who still plays there quite a bit, so that's a gem. Yeah, it's a great town. And also, I loved it, because I didn't know anybody. It's definitely a town where you could just go to a random bar, sit down, watch the game, and whoever's around you will chat.
It's interesting to talk to. Yeah, that's true. So you end up, you're in Australia now. I am in Australia now. And you were in Australia before. Yeah, so I'm in Queensland, which is a kind of, well, the part I'm in is kind of a subtropical beachside town. It's not a resort town, but it's kind of like the nearest capital city is Brisbane, which is like four or five million people, and it's the nearest kind of beach town that people would go to for a weekend or something.
So yeah, I always wanted to live in Queensland. I never understood why everybody didn't want to live in Queensland, and now that I'm here, I'm even more convinced everybody should live in Queensland. And the university is in Queensland, too. Yeah, the university's in Brisbane, so I'm an honorary professor at the University of Queensland, which is 45 minutes from here.
So when I teach there, I just drive in and do my thing, drive home. Cool. And in between, you were in Boston and California. Yeah, so I grew up in Melbourne, which seemed like the center of the world at the time, and I never understood why people talked about Australia as being far away, because it seemed pretty close to me.
But then, yeah, moving to San Francisco, I stayed there for 10 years for a previous startup called Kaggle, and suddenly realized, yeah, God, Melbourne is a very long way away, physically and everything else. And I do now feel like it's a very good experience for somebody who grows up away from a kind of intellectual center like that, just spend some time living in one, just to experience that.
Yeah, for sure. And yeah, so tell me about what you're doing now. So you're the CEO of RStudio. RStudio, which I started about 11 or 12 years ago. And it started off originally as just an open source IDE for R. It was not actually intended as a company.
It was just me and one other person. And we had worked on lots of development tools and programming languages and authoring tools in our previous lives. And I had been involved, in graduate school and as an undergrad, in the social sciences and statistical programming for the social sciences. And originally that was what I wanted to make my career.
And then I kind of got swept up into software. And so when I finished with the startup and found out about R, I said, wow, there's an open source statistical programming system. That's cool. I really would like to work in open source. And R, as you know, is written by statisticians for statisticians, which accounts for a lot of things it got right.
But then some of the software tooling part they struggled with. And so I said, well, here's where I can make a contribution. I know the tooling part, and I'd like to see this project get used by more people. So we just started working on the IDE.
And then, long story short, we ended up getting to know Hadley Wickham. He was working on what was then not yet the tidyverse, but dplyr and ggplot and things. And we said, let's all work together. And then that sort of begged the question of, well, how is it that we're going to all work together and make sure everyone gets paid and everything.
So he said, well, let's try to make a company out of this. And we did that by building sort of enterprise grade servers that made it easier to adopt a lot of our open source software. Yeah, I knew Hadley from before RStudio, because of course everybody in Australia and New Zealand knows each other.
So yeah, I remember actually hanging out with him in Texas when he was at Rice University. Yeah, that's right. And he was already famous for his amazing contributions. And he was saying to me, you know, wow, the university must love having somebody like you there.
And it's like, no, quite the opposite. They don't appreciate it at all, and I struggle to get support for what I'm doing. And I was just like, oh shoot, you know, what a terrible thing is going on in academia here. I know. And I was so glad when you found him and he found you, and that worked well.
Yes, it has. So that's good. And the company has developed well, and that's afforded the other projects. One of the projects I worked on was R Markdown, which is kind of a literate programming system for R. I actually started working on that about 10 years ago, and we had a lot of success with it.
But it was quite narrow in a sense. Why did you do that? Did you have some previous interest in literate programming? Honestly, there were two things that happened. I was working with a bunch of faculty who were teaching R, and they were teaching at the time with Sweave, which was this sort of LaTeX-based literate programming environment that was built into R. They were doing that because they wanted to teach people R programming and a reproducible workflow, but then they're also teaching them LaTeX, which was really... But that's unusual already, right?
Like, not many people... That's unusual. R had this thing, Sweave, built into it, you know, in like 2007 or so, I mean, they were way ahead of, or even before... So R always had this sort of thing in the community. And it was actually one of the core members of the R team who built Sweave. So they were pushing this literate programming idea.
So I kind of got infected with it by exposure to that. And then at the same time, I went to useR! in 2012 in England. And one of the people there presented a three-hour seminar on org-mode, presenting another system for literate programming that was more, you know, human-readable and ASCII-oriented.
And so just to clarify for people who haven't seen it: org-mode is an Emacs mode. It's not just a mode, but it's also a file format, which is in many ways a lot like Markdown. It's not at all compatible, but it's the same basic idea of a text-based format. But also in org-mode, your code can be evaluated, and the results of the execution appear in the document.
So it has a lot of what R Markdown is, right? It's kind of like executable code, and the outputs appear. That's exactly right. So it's sort of like this idea. Well, we'd been asking: it's really hard to teach people LaTeX. Some people were saying, well, is there a way we could get this into Office?
Can we get there with OpenDocument? How are we going to get people to do this while not burdening them with learning LaTeX? When I saw org-mode, I said, wow, that's a better idea to me: a more just-ASCII, human-readable idea.
But at the time, Markdown was already really taking off. It was already in use on GitHub, and it was in use in a bunch of wiki systems. And so I said, let's take the core ideas of org-mode and Sweave and build a Markdown variant of that. And I did it with R because that's the environment I was working in.
I just had sort of blinders on: let's just make this work in this environment. And were you personally doing stuff with literate programming yourself and finding it useful? I found it useful because I was building websites and documents. And yeah, I definitely thought this is a great way to work.
And at the same time, Yihui Xie had created a package called knitr that was sort of a replacement for Sweave, a sort of feature-enhanced version of Sweave. And at the same time, he made it open, so it could do reStructuredText, and it could do AsciiDoc, and it could do Markdown.
And so Yihui and I got together and said, let's create this thing called R Markdown, which basically says we're going to use knitr as the computational engine and we're going to use Markdown. At the time, we basically used Sundown, which was GitHub's Markdown processor.
And we added math, you know, so that was pretty straightforward. These tools all have R in them. Are they all exclusively R tools? They require R to run. They're pretty much R. They're now multi-engine, so knitr has this idea of an engine. So there is a Python engine and a Julia engine, but you're calling Python from R.
You have an embedded Python session in your R session. You have an embedded Julia session in your R session. So even though it's multiple languages, it's very R-centric. And then we did the first iteration of it, and you could just make web pages.
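To make that multi-engine setup concrete, here's a minimal sketch (my addition, not something shown in the conversation) of an R Markdown document with both an R chunk and a Python chunk, where knitr runs the Python code in a session embedded in R via reticulate:

````markdown
---
title: "Mixing engines"
output: html_document
---

```{r}
# An ordinary R chunk
x <- rnorm(100)
mean(x)
```

```{python}
# A Python chunk; knitr runs this in a Python session
# embedded in the R session via reticulate
print("hello from Python")
```
````

Rendering this produces a single document with both chunks' output, but as JJ notes, the whole thing still requires an R session to run.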
And then at the same time, Pandoc was kind of evolving, and people were trying to figure out, oh, let me just glue together R Markdown with Pandoc, and then I can make Word documents and PDFs and so on. That's going to be something a lot of people are not familiar with.
So Pandoc is basically a Markdown processor. I think it's written in Haskell, right? Although it's a compiled binary, so that doesn't matter for most people. And yeah, it is kind of a Markdown processor, but it can take almost any input, convert it into an internal representation, and then convert that into one of its many output formats.
Any text to any text. Yeah, it doesn't actually even convert it to Markdown. It converts it to an internal format, a sort of abstract document. So if you're going Word to PDF, it's never seeing Markdown. Yeah. So JJ, I had used Pandoc before talking to you about all this stuff, but I had used it in this very naive way of just being like, oh, I've got an HTML document and I want to convert it to LaTeX, or convert LaTeX to Markdown, or whatever.
And I just run it. Now, what I've learned from you is that Pandoc has this embedded Lua interpreter and this very generic system, kind of a bit like nbconvert for notebooks. Yeah, that's right. It takes its input as a kind of abstract syntax tree.
You can munge it however you like and spit it back out. You can fit that anywhere in a Pandoc pipeline to kind of construct your own... It's a pipeline of transformations to the document. The most obvious of which is, I just want to make a PDF or a Word document or a web page, but there are others.
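As a concrete illustration of that pipeline idea (a sketch I'm adding, not something from the conversation): besides Lua filters, Pandoc also supports JSON filters, which read the document AST as JSON on stdin and write a transformed AST to stdout. Here's a minimal Python walker that uppercases every `Str` element; `Str`, `Space`, and `Para` are Pandoc's actual AST node types, while the function names are mine.

```python
import json  # needed when reading the AST from stdin in a real filter
import sys

def walk(node, action):
    """Recursively apply `action` to every tagged element of a Pandoc JSON AST."""
    if isinstance(node, list):
        return [walk(x, action) for x in node]
    if isinstance(node, dict):
        if "t" in node:
            replaced = action(node)
            if replaced is not None:
                node = replaced
        return {k: walk(v, action) for k, v in node.items()}
    return node

def upper_str(el):
    # Replace each Str element's text with its uppercased form
    if el["t"] == "Str":
        return {"t": "Str", "c": el["c"].upper()}

# A tiny hand-built AST standing in for `pandoc -t json` output
doc = {"meta": {},
       "blocks": [{"t": "Para",
                   "c": [{"t": "Str", "c": "hello"},
                         {"t": "Space"},
                         {"t": "Str", "c": "pandoc"}]}]}
out = walk(doc, upper_str)
```

In a real pipeline you'd wire this between two Pandoc invocations, something like `pandoc in.md -t json | python filter.py | pandoc -f json -o out.pdf`, with the script reading the AST via `json.load(sys.stdin)` and dumping the result to stdout.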
Yeah. And the other thing to mention is, as you said, it doesn't particularly require Markdown, but Pandoc Markdown is this fairly universal format, because you can express things like divs and classes and layouts in the Markdown syntax. You can express the whole Pandoc AST in Pandoc Markdown.
Yeah. So it's kind of a Markdown on steroids. One of the ideas, though, that was taken really seriously by John MacFarlane when he created Pandoc was this. The original Markdown had the idea of raw HTML, because John Gruber's idea was, this is just an easier way to write HTML.
So of course you can put raw HTML in there. If something isn't in Markdown, just go ahead and add the HTML. So that's a good idea. But MacFarlane was interested in creating technical manuscripts, so he extended that: you can put raw LaTeX in there.
So he basically said you can also have raw LaTeX, and he made it so it was very good at generating LaTeX. Yeah, because there's also Pandoc citations, for example. And citations, right? So he added this idea of, let's take LaTeX really seriously, in a way that other Markdown processors tend not to, because that's not really their use case.
A lot of them are tied to content management systems and things producing web content. And then, let's take citations really seriously. So Pandoc had a really robust implementation of citations and integration with the Citation Style Language. So, really first-class citations and support for LaTeX, and then ultimately support for Office document formats and OpenDocument and things like that.
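To make those features concrete, here's a small sample I'm adding of Pandoc Markdown syntax: a fenced div with an id and classes, a citation, and a bit of raw LaTeX, all of which Pandoc supports natively (`@knuth1984` is a hypothetical citation key that would resolve against a bibliography passed via `--citeproc --bibliography refs.bib`):

```markdown
::: {#results .callout .wide}
A fenced div: Pandoc Markdown can express divs with ids and classes.
:::

As shown by Knuth [@knuth1984], literate programming predates Markdown.

Raw LaTeX passes straight through to LaTeX output: \emph{emphasized}.
```

None of this exists in the original Markdown, which is what makes Pandoc Markdown able to round-trip so much of the Pandoc AST.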
So it was a more elaborate, comprehensive, hackable version of Markdown. So when we migrated, we created sort of an R Markdown v2 based on Pandoc. And then... And how long ago was that? That was about eight years ago, maybe even nine. Pretty early on we moved to Pandoc.
And then, kind of to make a long story short, we created a lot of extensions to R Markdown. We created a thing for making books, and we created a thing for making blogs, and we created a thing for presentations, and for kind of like fancy grid layout of documents.
And so we had all these... we did a version of the Distill machine learning journal from Google. If you've seen those articles, we made an R Markdown version of that. So we sort of innovated a lot, in a very fragmented way. And we ended up at the end of this with a system that has a lot of functionality that's fractured across a bunch of packages, with a bunch of inconsistency, and that's R-only.
And so we said, that is kind of a dead end in terms of having a bigger impact on scientific computing. And so we said, if we could take a step back, build a system that was agnostic to the computational engine, and at the same time roll up and synthesize a lot of the ideas we developed over that 10-year period into one uniform system, then that would be what we needed to do to keep investing in a way where we felt like this project is going to be meaningful in decades.
So it was almost like taking a couple of steps back. And that was a couple of years ago. We said, let's start working on Quarto, which is language-independent and engine-agnostic, where the first two engines supported were knitr, which was what we supported in R Markdown, and Jupyter.
And so those are sort of equal citizens. And it is possible to-- Let me just get that up. So yeah. OK, so here's quarto. OK, so this is what you're working on. That's what I'm working on now. So that's pretty much what I've been working on for directly or indirectly for about the last three years.
And this looks a lot like Markdown. It does. Yeah. Its syntax and approach to things is derivative of R Markdown. And so you've got some YAML front matter, so some metadata, which is supported by Pandoc, I believe. Then you've got some Markdown. This looks like something that's not in any Markdown I'm familiar with.
That's right. That's a cross-reference. OK. So it's saying I want a reference... Here is the label. And so now we've got, as a result, the Markdown here, the metadata here. The code is also folded. And I guess I can click on this picture. And a hyperlink.
This was all the cross-reference. It's numbered the figure. It's only figure one, but if there were 17 figures, you'd see one, two, three, four, et cetera. So that's kind of the idea. It has interactive documents as well. Yep. Yep. So we do integration with Observable and Jupyter. So with Jupyter, we put the most effort into Python, making everything work great in Python.
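For readers who haven't seen it, here's a minimal sketch I'm adding of what such a Quarto document looks like: YAML metadata, an `@fig-` cross-reference, and a labeled, captioned code cell. This is standard Quarto syntax, though the specific labels and content are invented.

````markdown
---
title: "Results"
format: html
code-fold: true
---

The trend is visible in @fig-trend.

```{python}
#| label: fig-trend
#| fig-cap: "A simple trend line"
import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4])
plt.show()
```
````

Quarto numbers the figure automatically and renders `@fig-trend` as a hyperlinked "Figure 1", which is the behavior being described on screen.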
We've put some effort into Julia. Any Jupyter kernel works with it, but if we do a little extra work, then it works better. So yeah, I mean, seriously, anything works. I've been recently playing with APL, and I created the first ever APL kernel. Nice. And so here are links to the APL glyph documentation.
Yep. Here is an auto-generated table of contents. That's cool. And then here's a Python one. Yeah. So that's what I've been working on. And I know the way that you and I got connected... well, we got introduced: hey, you should get to know each other. And then we got to talking.
Yeah, that's right, Wes, author of pandas and Arrow and such. Yes. Yeah. So Wes introduced us, and it was like, what are you working on? What are you working on? And you talked a little bit about nbdev2 and literate programming. And I said, well, this Quarto thing might be related to what you're doing.
But it might be. I mean, I already very much knew you by reputation, because I was not a big user of ColdFusion, but I was an enthusiast of it, which I can come back to and talk about. And I was a big user of Windows Live Writer. So these are both things that you had built.
And Windows Live Writer was something which felt like... it reminded me of the original Mac OS graphing calculator. It felt better than all of the other things that were around it somehow, even though it came from Microsoft. And I thought, like, how did something so good come out of the Windows extras, or whatever it was called, Windows Live Essentials?
Well, yeah, there was like... anyway. Yeah. And then I remember at the University of San Francisco, one of our admin staff said, oh, we just got this request from a guy who's thinking of flying in for the lessons. You might want to get in touch with him to see if that's suitable.
And it's like, what's his name? It's a guy called JJ Allaire. And I was like, oh, JJ Allaire is interested in fast.ai. That's really cool. Well, the reason I was going to do that was I was working on creating an R interface for Keras.
And so I had done... we had done... I created the R interface to Python, which is called reticulate. And then we built the TensorFlow interface, and then I was building the Keras interface. And I said, well, I'm going to go take Jeremy's course in Keras. And then I found out, wait, it's not in Keras anymore.
Right? Yeah. And I said, OK, I would still like to take the course, but it's less right down the middle of what I'm doing. So I didn't do it, although I actually had convinced one other person to do it with me. Although you did tell me that some fast.ai ideas did end up in some of your work.
Yeah, yeah. So studying fast.ai... especially as we did our PyTorch work, because as you know, PyTorch doesn't offer you much in the way of a built-in training loop. Right. And it doesn't really organize your work the way Keras does.
Right. And I think we rather liked the things you did in fast.ai. And so we said, can we do some variations of those for our interface? Because clearly it wasn't enough to just say, oh, you can use torch from R. I mean, for certain researchers it's fine, but not for end users.
So yeah, I mean, I try to encourage even researchers not to just use raw PyTorch for everything, because you really want to be incorporating best practices as much as you can. Since we're on fast.ai, I did have a couple of questions.
And one of them is, if you think about how you help both new users ramp into things and make experienced users productive, you provide these abstractions. And there's a dial of how leaky you let the abstractions be, all the way from, hey, we've hidden it, you don't even know PyTorch is here, at one end; the other end is, learn PyTorch, then learn our special shortcuts.
And in the middle is somewhere like, well, PyTorch is present, it's not hidden, and you can probably extend this with PyTorch. And, you know, I think different software design problems lend themselves to different levels of leakiness. How did you think about that? Yeah, so I've been coding for 40 years, and I've spent a lot more time coding than building deep learning models, and a lot more time reading and studying coding than deep learning.
You know, software engineering, our ability to do good things with computers, is based on being able to use abstractions. And those abstractions are in turn based on lower-level abstractions, and so forth, down to machine code. We're shielded from the hard disk controller, you know, et cetera.
None of those levels of abstraction is the correct level. They're all correct for what they do. So with fast.ai, my approach has always been the same as all the coding I've always done, which is, if I'm writing some high-level API, I write it using some lower-level API, which I then write using some lower-level API, and so on, until I get to the point where each of them is trivially easy to use, ideally, and is a kind of carefully designed set of primitive operations that make sense at that level of API.
So there are three main levels of API in fast.ai: the high level, mid tier, and low level. The high-level API is focused on applications. We provide support for four, which is vision, text, tabular, and collaborative filtering. And then there are other folks in the community who have added stuff around, you know, medical and audio and whatever.
And in each case, you basically use the same four lines of code. OK, it's kind of a push-button interface. Yeah, and that was the recipe. And that was very much designed around the idea that one day we want to get rid of the code, and there'll be a higher-level API still, which is no code.
Yeah, this is what I wanted to ask you. Well, when you finish, I want to ask a follow-up question. OK, cool. This is really important for stuff like deep learning, because the more boilerplate you have, the more things there are that you can screw up. And so if you have to manually create your validation set, manually make sure it's not shuffled, manually make sure the training set is shuffled, and manually make sure the augmentation is only applied to the training set, each of those is something that you're reasonably likely to forget.
And when things break in deep learning, they don't break properly. Generally, they don't give you an exception or a segfault; they just give you slightly less good answers, or misleading metrics. Yeah. So then the mid-tier API is the bit I'm most proud of.
And I find that's often the hardest bit to write. You want something that's extremely flexible, so that you almost never have to go deeper, but still really convenient. And so for example, we've got a thing called the DataBlock API, which came from... you know, I've been doing machine learning for, let's see, over 30 years now.
And I just thought back to, well, what's the entire set of things I've had to do to get data into model training? And I realized that there were just four or five basic things. And I realized that when I pulled out those four or five basic things, the huge number of classes I used to have before I built the DataBlock API, I could replace them with just these five things by putting the blocks together.
And so I was able to reduce the amount of code I had tenfold, and increase my ability to write my high-level API a lot, and then give the same thing to all my users. And then the bottom-level API is still above PyTorch.
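To illustrate the shape of that idea (this is a toy sketch I'm adding, not fastai's actual DataBlock implementation; the function and parameter names are invented): a handful of small, composable pieces, roughly how to enumerate items, how to split them, and how to derive the input and label, is enough to assemble train and validation sets.

```python
def build_datasets(source, get_items, splitter, get_x, get_y):
    """Toy 'data blocks': compose a few primitives into train/valid datasets."""
    items = get_items(source)
    train_idx, valid_idx = splitter(items)
    make = lambda idxs: [(get_x(items[i]), get_y(items[i])) for i in idxs]
    return make(train_idx), make(valid_idx)

# Example: label image file names by a prefix ("cat" or "dog")
files = ["cat1.jpg", "dog1.jpg", "cat2.jpg", "dog2.jpg"]
train, valid = build_datasets(
    source=files,
    get_items=lambda src: src,                # how to enumerate items
    splitter=lambda items: ([0, 1], [2, 3]),  # how to split train/valid
    get_x=lambda f: f,                        # how to get the input
    get_y=lambda f: f[:3],                    # how to get the label
)
```

Swapping any one block, say a random splitter instead of a fixed one, changes behavior without touching the rest, which is the flexibility being described.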
Well, it's mainly filling in the things that aren't in PyTorch but should be. So for example, I like using some object-oriented programming, and I believe that types should, where possible, represent semantic things. That's something which doesn't really exist in PyTorch, so I added object-oriented, semantic types to PyTorch.
That's something they've since added, but it's still not amazing. We also created, for instance, a computer vision library that entirely operates on the GPU and does things in a really efficient way. So kind of stuff like that. So then the idea is that if a user is doing something supported by our application API, we want them to be able to use it.
We want them then to be able to say like, okay, that worked okay, but I wonder what if, you know, could I make it faster by doing this? Or make it more accurate by doing that? And they can just pull out one piece and replace it with a mid tier API thing, you know?
So rather than starting at the bottom and then simplifying things with a higher tier, start at the top, which is also how we teach, and then add in lower-level things if and as you need them. Did you have a goal, kind of what I'm thinking about with leaky abstractions... I have not personally used PyTorch, but I use Keras quite a bit. If someone finds the equivalent of a layer, you know, someone has written a layer for PyTorch, they find it on Stack Overflow: how do I reduce the error here, whatever?
Oh, do this. Is it, at one level, that you can literally just point to that? Or, at another level, do you kind of need to package it, put it in a frame that fast.ai can consume? Yeah, so the idea is basically that everything should be very easy: you can grab stuff from elsewhere and just use it.
So we actually have a bunch of integrations, for example, but in particular, there's like, OK, what if... Yeah, that's a great virtue of a system, if it can do that. Then it doesn't suffer from the "we have to do everything" problem.
Exactly. No special packaging, no special wrappers. So what I did for this one was I grabbed the MNIST training code from the official PyTorch examples. Yep. They originally had it as a script, so I just changed it to a module. And so here, this is their code, right?
So I took their code. And then I said, OK, what if we wanted to replace their training loop and test loop? That's a lot of code, right? And it's also not a particularly good training loop and test loop. What if we replaced it with the fast.ai one? And by using the fast.ai one, you're going to get for free things like TensorBoard and Weights & Biases integration, you're going to get all kinds of metrics, you're going to get automatic mixed precision training, whatever.
And so the answer is that you can take all that train and test stuff and replace it with these two lines. That's great. And then run this one line. And this is now also going to run with one-cycle training, so it's going to do a warm-up, it's going to do a cool-down, and it's going to print out as it goes.
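For readers unfamiliar with one-cycle training, here is a rough sketch I'm adding of the schedule's shape (a simplified approximation, not fastai's exact implementation; the function name and defaults are mine): the learning rate warms up from a low value to a peak, then anneals back down, both phases following a cosine curve.

```python
import math

def one_cycle_lr(step, total_steps, lr_max, pct_start=0.25, div=25.0):
    """Approximate one-cycle schedule: cosine warm-up to lr_max, cosine cool-down."""
    warm = int(total_steps * pct_start)
    if step < warm:
        # Warm-up: rise from lr_max/div to lr_max
        t = step / max(1, warm)
        lo = lr_max / div
        return lo + (lr_max - lo) * (1 - math.cos(math.pi * t)) / 2
    # Cool-down: anneal from lr_max back toward zero
    t = (step - warm) / max(1, total_steps - warm)
    return lr_max * (1 + math.cos(math.pi * t)) / 2

schedule = [one_cycle_lr(s, 100, 0.01) for s in range(101)]
```

The point of hiding this behind a single call is exactly the boilerplate argument above: the warm-up and cool-down happen correctly without the user writing, or mis-writing, any of it.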
And that's literally it. Fantastic. And it's the same for other things, you know. So for example, I grabbed the PyTorch Lightning quickstart and converted it to a module. And so those data types, the data types that are used by fast.ai, since they're fundamentally the PyTorch data types, that's how it all fits.
They're not obscured. Yeah, that's either true, or we create our own API-compatible versions. So for example, the PyTorch data loaders are things which take things that are either indexable or streamable one item at a time and batch them. And we created something with the same name, the fastai DataLoader.
And then we added stuff to it. We added a bunch of callback hooks so that you can modify the data, you know, after it's been batched, or after it's been turned into an item, or whatever. So, I was thinking about your application layer, because I know in your course you say you need high school mathematics and some programming to be able to learn this.
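Here's a toy sketch I'm adding of that batching-plus-hooks idea (purely illustrative; fastai's real DataLoader is far more capable, and the class and hook names here are invented): take anything indexable, yield fixed-size batches, and let a callback transform each batch after it's assembled.

```python
class ToyDataLoader:
    """Batch an indexable collection and run an after-batch callback."""
    def __init__(self, items, batch_size=2, after_batch=None):
        self.items = list(items)
        self.batch_size = batch_size
        self.after_batch = after_batch or (lambda b: b)

    def __iter__(self):
        # Yield successive slices, passing each through the hook
        for i in range(0, len(self.items), self.batch_size):
            yield self.after_batch(self.items[i:i + self.batch_size])

# Scale every element by 10 after batching
dl = ToyDataLoader(range(5), batch_size=2, after_batch=lambda b: [x * 10 for x in b])
batches = list(dl)  # [[0, 10], [20, 30], [40]]
```

Hooks like this are what let a library apply, say, normalization or augmentation at exactly one well-defined point without the user rewriting the loop.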
And my question is, and I don't even know if this is a good or a bad thing, so it's more just a question: you can imagine, as you said earlier, an application that does transfer learning and takes various types of well-known data and lets people say, oh, I'm doing computer vision.
Is that the right layer or not? Do you think that's a desirable layer to have, or are you at the right layer now, where the person will encounter enough complexity that they really had best know some math and some programming? Yeah. Or you could see where it would not be desirable to go further.
Yeah. So the answer is, so far, we've failed at our goal to make deep learning accessible, because we require high school math and a year of coding, and that's not accessible, because I think only 1% of the world has that coding background. So the goal has always been to get to a point where... I use the analogy of the internet, right?
So when I started on the internet, you would have to do it all through the terminal. And even when the first GUI things came in, you would have to set up PPP configuration files and whatever. And, you know, I'd read Usenet news with rn, with all these arcane keyboard shortcuts. I mean, I loved it.
But it wasn't the most accessible thing. Nowadays, you know, my mum, who's 83, uses the internet every day to chat to her six year old daughter on Skype and whatever. That's what most, you know, AI should look like. Okay. We're starting to see a bit of that with things like Codex and DALI Mini and DALI 2 and Mid-Journey and whatever, GPT-3, where, you know, I don't know if you saw it yesterday, a book on OpenAI Prompt Engineering came out.
In fact, I'm gonna see if I can find it because it's quite interesting. And so basically, it's like there's still skill involved in trying to create beautiful and relevant images using DALI 2. But it's not coding. It's a different skill. It's Prompt Engineering. And okay, I think I found it.
So let me share my screen here. And I like this because we're all about domain experts, you know. And so, you know, here's a whole book about how to create nice pictures with DALL-E, with lots of examples of nice pictures from DALL-E. And there's no code in it, right?
It's saying, like, oh, we've done some research to find out what kinds of words create what kinds of pictures; here are examples of that for you. And so someone learns, essentially, a craft of how to see the right sorts of things. It's totally different than programming.
Right. And it requires, like, a genuine understanding of the domain. So if you want to create good camera shots that don't exist, you have to know about words like "extreme close-up" and "CineStill 800T". Yeah, well, you can become very, very good at this. You know, extreme long shot. And even, like, describing shadows and proportions.
This is the kind of thing we want people to be spending most of their time doing. And also the kind of people I want to be doing it are domain experts in that field. So we want, you know, product marketing people, you know, product photography people using their product photography skills to create product photography mockups.
We want disaster resilience experts to be doing disaster resilience. We want radiologists doing radiology, supported by AI, you know. Right. So yeah, the tool that you would build for a radiologist, I mean, in a way, you can imagine a radiologist is training a model; basically, in a way, they're doing transfer learning, they're applying their data there.
Yeah. But it's in their DICOM viewer, you know, in their radiology workflow software. Okay. Well, I think the answer is that you would like to go quite a bit farther than you have, right, with that. I don't quite remember what we said at the time we started. So when my wife, Rachel, and I started fast.ai, I think we were thinking it's at least a 10-year goal of making deep learning more accessible.
And, like, our first step was, well, we should at least show people how to use what already exists. So that's why we started with a course; that was the first thing we built. Because also, that way, we would find out, well, what doesn't exist but ought to, you know. And so then it was like, well, basically nothing works except computer vision at the moment; we should at least make sure this works for text.
So step two was, I did a lot of research into text, and I built the ULMFiT algorithm and integrated that, and, you know, so there's a lot of research to do. And then it was like, okay, well, from the research we've done, we've realized that there's a lot of things that you could do a lot better if only the software existed.
So then step three was to make the software exist, you know; so then there was a lot of coding. And then, you know, come back full circle, do another course, now showing the best practices using everything we've learned and built. Where are we now, you know? And so repeat this.
So we're just about to launch version five of this process, which, except for a year off for COVID, has been an annual exercise. Yeah, I wouldn't be surprised if in the next five years, we have quite a bit of the, like, code-free stuff that we're aiming for.
Okay. Yeah. Okay. All right. All right. My turn, if I may. Okay, go for it. You got it. I wanted to change tack a little bit, if I can, to talk about your background, JJ. And the reason for that is I like to understand the background of people who are doing interesting things in interesting ways.
And, like, one of the ways I find you interesting is that your title is CEO, but in an interview I read, you said you spend about 80% of your time coding. And I know from personally interacting with you a lot over the last few months on building nbdev2 that, generally speaking, if before I go to bed I send you a message saying there's a bug here, then by the time I wake up in the morning there's a "fixed the bug" commit. So that's unusual, you know. And it's also unusual that, I don't know, you seem to do things differently to most people. You feel more like a kindred spirit to me in a lot of ways, in that you seem to like doing things reasonably independently, but leveraging a small number of smart people.
And, you know, I was also interested to learn that, like me, your academic background is non-technical: you did political science, I did philosophy. I'd love to hear, yeah, what was your journey from doing poli sci to founding at least three successful software companies to now working in scientific publishing?
Yeah, yeah. How did that happen? Well, it started with, well, there are a couple of different threads that come together. So one was how I got interested in data analysis and statistical computing: I was a huge baseball fan. And when I was, like, 12, I got a hold of books by Bill James, who you probably have heard of.
And he was a math teacher from Kansas City who wrote the Bill James Baseball Abstract, which essentially created this idea: why don't we empirically measure everything we can about baseball and see what's true and not true? And I don't know any other sport that has a whole field of academic study of statistics, you know, sabermetrics, based on that.
And he started all that anyway, but what was impactful for me was, I was also very interested in politics, my parents were political activists, and I was mostly interested in politics, interested in baseball, and I got the Bill James books. And I realized, like, everything that people said on television about what was true about baseball, not everything, but a lot of the stuff, was just nonsense.
The coaches, players, and broadcasters: nonsense. So that had a big impact on me. I was like, well, if that's true, then a lot of the things people say about a lot of things are probably nonsense. And probably data analysis is actually really fundamentally important. And so then, when I was looking at political science, that was my lens.
I actually happened to find a great mentor in college who was also really into it. Can I just mention, I had a similar background, but for a totally different reason, which is I started at a big management consulting company when I was basically 10 years younger than everybody else.
And they all worked using their expertise and experience, which I didn't have. So my view was like, oh, I'm going to have to use data analysis, because that's one of the ways I can contribute. Yeah. So anyway, political science; I actually was convinced I wanted to be a political scientist, focused on data analysis and things.
And so I basically went to graduate school to get a PhD in political science. And by that time, I had actually taken a year off and worked at the Minnesota Department of Revenue as an analyst. And I had done plenty of messing around with software.
I had learned, you know, dBase and HyperCard and, you know, various other kind of scripty things that a layperson could access. I had no training in computer science, and I didn't take computer science in college, but I was able to get my head around things like HyperTalk and dBase and things like that.
And so then, yeah, SAS and, you know, all these kinds of things I was exposed to. I remember reading you were doing stuff with SAS and SPSS, which are some things I worked through. Yeah, SPSS, you know, Excel macros. So I ended up at the Department of Revenue.
I did a lot of SAS. I did a lot of... They're very pragmatic programming tools. Very pragmatic. Quattro Pro, you know, all this. So then I got to graduate school, and I just found, like, wow, I just really care a lot more about software right now than I do about political science.
It was actually at that moment, '92, '93, when software was really coming into its own. Can I just ask about that discovery? Yeah. Were you okay with that? Because I wasn't, you know; for me, I felt embarrassed. I did, because my mentor, oh my God, I spent four, five years, I spent so much time with my mentor, and, you know, I just was like, wow, I know what I'm supposed to be doing, and this is not what I'm supposed to be doing.
Right. But I really just went with the evidence of, like, when I go to the bookstore, I spend all my time in the computing section, and that lights me up, and that's what I want to talk about. And I think you had more self-confidence than I did. Well, I also had a negative experience with academia; even though I had a couple of great professors, it didn't feel like, you know, I didn't feel like I was going to succeed.
Even if I was into it, it didn't resonate when I got there. And so I was like, well, I'm not going to do this, and I think I want to do that, so I'm going to go try it. So I basically went off and said, you know, I'm not trained to write software.
I need to learn a bunch of stuff. And I went and started, you know, teaching myself a bunch of stuff I needed to know. And then I eventually got bootstrapped into doing some contracting. And so I sort of was a contractor and kept learning stuff.
And then I kind of by happenstance and good fortune ran into the internet. And I had actually worked with my brother on... So when was that, roughly? That was in, well, we got the internet at college my senior year, so that would have been '91.
And then the web was '93. And my brother was really into the internet, and he was going around the Twin Cities. He got City Pages, which is the, you know, the city newspaper, he got them to say, we're going to do classifieds and forums, and we're going to do all this stuff on the internet.
And my brother doesn't write code. So he's like, hey, JJ, you're a contractor. Can we do this? I was like, sure, I can figure this out. So I did that. And the other big thing that happened for me was that I was a fan of these tools that let ordinary people program.
I was a fan of dBase and HyperTalk and spreadsheets. And so I was like, that's really empowering. And so what happened was I said, wow, you know, my brother just told me he's going to learn Perl so he can write websites. Yeah. And I'm looking at what I did.
I learned Perl so I could write websites. I'm shoveling data in and out of a database and putting it through, like, a template, you know, and mapping form fields to a database. Like, we don't need Perl here. You know, I mean, it turns out to do fancy stuff, you need the equivalent of Perl, but to do the most basic things, you don't.
And so that's how I kind of came up, and I always loved the idea of tools and abstractions and making computing accessible and programming accessible. You know, I think the first one of those tools for the web was Australian. It was HotDog. Do you remember that?
That's right. That's exactly right. It was. Yeah. So I kind of said, well, I'm going to take a shot at making a tool and see what happens. And that was called ColdFusion. So here it is, ColdFusion.
It still exists. It's part of Adobe now; there's a developer week happening now, it looks like. So what year was the first version of this? '95. I mean, that's good longevity. That's right. Yeah, it's had a great existence.
And one of the big ideas, one of the biggest things I learned: when ColdFusion came out, there were probably 10 tools that did the same-ish thing. Was this before or after FrontPage? Because that was huge.
It was concurrent with FrontPage. And FrontPage didn't really do this. No, it didn't. Yeah. So it was concurrent with FrontPage. And basically, two of the biggest differentiators were, we had basically really good documentation and really good error messages, you know?
And we'd see competitors that had twice our feature set get no adoption. And what language did you write this in first? Okay. So, I mean, I don't remember at what point, but you said you learned C++? When I left graduate school, I learned C++.
Yeah. It took a couple of years, and I did the City Pages project, and then this was my first serious project. And I wouldn't say I was good at it at that time, but I was certainly good enough to ship something.
So yeah, so that was that, and that was a great experience, and I learned a ton from it. I also learned that, as you were saying, I didn't particularly relish the parts of entrepreneurship that didn't involve product development, you know? And a lot of those are really important things that need to happen.
Right. So nowadays, I think you said before, you delegate that largely to your president. Yeah. The president of the company runs everything. And I do get involved with, you know, company strategy, and there are certain things that are really important for me to be a part of, but then I try to preserve that roughly, you know, 80% of my time coding.
And I actually think that it's not just an indulgence. I actually think that great products need to have people who are aware of the whole matrix of what's going on. Why is this important? Why is this feature important? Which users are important? How do users think? Having that stuff close to the keyboard is imperative.
And a lot of times that doesn't happen, because it's somebody else. Yeah. Somebody else I've spoken to who has a similar approach is Michael Stonebraker, who's built a lot of the best database tools in the world at many companies. And yeah, he told me, I mean, he's also an academic, you know, so he kind of invents stuff and then finds a trusted partner to bring it to market with.
I don't think he's ever called himself a CEO. He kind of calls himself CTO, but, you know, it's his vision, and somebody else is running the admin. He creates this thing, and there's conceptual integrity in what he creates, and he gets all the trade-offs.
I mean, there's like seven trade-offs a day that get made. Oh, he's in Boston, right? Now I think about it. That's right. Yeah, he's in Boston. Yeah. Hadley and I met him once. So that was mostly fun, just watching the two of them talk.
All right. I should let you have a go at a question. Yes. Well, I wanted to get into a little bit of, getting back to, nbdev2. Oh, please. So maybe just to orient the listeners who haven't seen nbdev or nbdev2: I mean, you've taken notebooks further than anyone thought possible and have created something really, really incredible.
And so I would love to hear, or I think other folks would love to hear, a general framing of what that is. And I have some follow-up questions about it. Yeah, sure. So, I mean, one of the best things I received was when the original creator of Jupyter and IPython notebooks sent me an email and said this blog post
is printed out and put on his wall, and he shows it to everybody who wants to understand what, you know, notebooks are meant to be all about. And basically, I really enjoy writing code in notebooks. And this is what my notebooks look like. So, this is a bit better here: these are the first few cells of the first notebook, which is used to generate nbdev.
And when I first started, I didn't know anything about notebook internals. So I had to figure out: what is a notebook? And so I wrote this thing that reads a notebook, and then I look inside it. And as I do that, I'm a huge fan of the scientific idea of journaling, right?
Most of the world's best scientists have been very thoughtful about how they journal, you know. So for example, the discovery of the noble gases, you know, was something where, basically, there was this little bit of leftover residue, and because the scientists had been so careful about the process and journaling the path, they recognized that it shouldn't be there.
You know, it's not "I made a mistake, throw it away"; it's like, let's look into it. It helps with rigor and knowing what's going on. So I like to document what I do as I do it. And I also know that at some point, I'm going to want to share this with somebody else.
I want to show them what I found out. And I'm going to forget this in a year, so I want to write down here what I found out. But then I don't want that to be a separate artifact somewhere else. Like, as I go along, I'm writing little functions, right?
So initially, these two lines of code would have been in their own cell; that would have been, oh, okay, that's how you open a notebook. Let's make it a function. And so I can chuck a def on top and give it a name. And you're also articulating your understanding. Yeah, exactly.
And then it's like, oh, I think it ought to give something like this. And I check, and it's like, oh, it did give that. And so now I've got a test of my understanding and the API. And I've got a check that it's going to stay consistent. And so that becomes a test.
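That move can be sketched in miniature. This is a hypothetical illustration, not nbdev's actual code; the function name `read_nb` and the file name are made up. Two exploratory lines that open a notebook's JSON get a `def` chucked on top, and the check of the result stays behind as a test:

```python
import json

# Hypothetical sketch: the two exploratory lines that opened a
# notebook file, wrapped up as a function with a one-line docstring.
def read_nb(path):
    "Read a Jupyter notebook (.ipynb is just JSON) and return it as a dict."
    with open(path) as f:
        return json.load(f)

# Make a minimal notebook to try it on.
nb = {"cells": [], "metadata": {}, "nbformat": 4, "nbformat_minor": 5}
with open("example.ipynb", "w") as f:
    json.dump(nb, f)

# The exploratory check of the API becomes a test that lives right here.
loaded = read_nb("example.ipynb")
assert loaded["nbformat"] == 4
assert loaded["cells"] == []
```

The assertions are the "test of my understanding" described above: they document what the function returns and keep checking it on every run.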
So let's actually have a look at this. So here is the notebook which creates nbdev. So here's notebook number one. And so we can then look at the documentation for nbdev. Because writing documentation, like, most people don't really do it. Yeah, yeah. Well, that's what I was saying, that the whole reason ColdFusion succeeded is because we wrote documentation.
Right. So you'll see that my documentation here is the same thing as the source code. And that's because source code and documentation and tests are all in the same place. And this is, in some ways, a lot more than just literate programming. It's what I call exploratory programming.
And it's this idea of trying to recognize that programming is a process done by humans, and that we can support humans doing that process by giving them tools that fit it. So that's really what nbdev is all about. And it's not a new idea: obviously Knuth was the guy who created the idea of literate programming, combining a programming language with a documentation language.
And these ideas that programs should be more robust, more portable, more easily maintained, and also more fun to write. All things I found to be true. When I'm writing code like this, I tend to be in the flow zone all the time. Because every line of code that ends up in a function, I've run it independently, I've explored it, and I've played with it.
I know how it works. So I don't have many bugs. And if I do, they're never weird bugs I don't understand. So I'm always progressing. So then Bret Victor, who I really admire, talked about a programming system for understanding programs. And he has some amazing examples of what programming could look like in a way that's much more exploratory and playful.
And then another thing which was fantastic: my friend Chris Lattner built Xcode Playgrounds, which again kind of lets you see what's going on, you know, how many times it's going through the loop, and what it looks like. And of course, Smalltalk; Smalltalk was explicitly designed for exploration. Like, you know, you have this whole...
I was going to mention Smalltalk in my follow-up question. So that's great. Yeah. So there was all that going on. And then, perhaps most relevant, Mathematica, which really developed the idea of the notebook, and I really always enjoyed working in Mathematica. But I never enjoyed not being able to do anything with it, because there just wasn't a great way to, like, take a Mathematica notebook and give it to somebody else to play with.
Yeah. Yeah. So when Jupyter came out, I felt like, oh, this is a good opportunity to take these good ideas and turn them into the thing I've always wanted, which is a way to build real software, real documentation, real tests, but in this exploratory way. So that's what nbdev is.
So you write your software in notebooks, and you basically, you know, run a cell or a CLI command, and it exports it to a Python module. And that module automatically ends up on PyPI, so you can pip install it, you can conda install it; it automatically gets a documentation website, automatically gets continuous integration tests.
So somebody who actually just tried using this for the first time, a couple of days ago, told me that from zero to having a website and module and continuous integration done, it was 10 minutes. Yeah, I believe it. And that's what you want, right? Because it's like, you know, you want to be able to say, oh, I brought you a little tool.
Here it is. There's the website, you know. And then when I get pull requests, you know, they're generally good, because they wrote them in the notebook. So they can see exactly what it's meant to be doing, they can see the tests there; they don't forget to write tests, because they're in the same place, they don't forget to write documentation, because it's in the same place, and they understand the context of what it's about.
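The export step described above can be sketched in a few lines. This is a deliberately simplified, hypothetical version, not nbdev's actual implementation: it just collects code cells that begin with an `#| export` directive and writes them out as a module.

```python
import json

# Simplified sketch of notebook-to-module export (not nbdev's real code):
# gather the source of every code cell that starts with an `#| export`
# directive and write the remainder out as a Python module.
def export_module(nb_path, mod_path):
    with open(nb_path) as f:
        nb = json.load(f)
    exported = []
    for cell in nb["cells"]:
        src = "".join(cell["source"])
        if cell["cell_type"] == "code" and src.startswith("#| export"):
            # drop the directive line itself, keep the code below it
            exported.append(src.partition("\n")[2])
    with open(mod_path, "w") as f:
        f.write("\n".join(exported))

# A tiny notebook: one exported cell, one markdown cell, one exploratory cell.
nb = {"cells": [
    {"cell_type": "code",
     "source": ["#| export\n", "def hello():\n", "    return 'hi'\n"]},
    {"cell_type": "markdown",
     "source": ["Some prose that stays in the docs, not the module."]},
    {"cell_type": "code",
     "source": ["hello()  # exploratory check, stays out of the module\n"]},
]}
with open("nb.ipynb", "w") as f:
    json.dump(nb, f)

export_module("nb.ipynb", "mymod.py")
```

The prose and the exploratory cells stay in the notebook (and hence the docs and tests), while only the directive-marked code reaches the installable module.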
So I also find it helps, you know, with open source collaboration as well. Now, I will say the tooling we built it on top of, which is largely nbconvert and the surrounding toolset around notebooks, I was never fond of. I found it a bit slow and a bit clunky.
I'm very grateful that open source volunteers built that stuff, but I didn't particularly like it. So then, when I came across Quarto, well, the first thing I noticed was, oh, this looks like nbdev; like, you guys are actually using cell comments. Which we took from you. Fantastic.
Because we were struggling with attaching metadata to cells. And as you know, notebook editors have a facility for that, but it's hard to find and requires you to edit raw JSON. So we said, well, that's not good. And then I saw you doing that. People were also using tags.
Absolutely. You know, and I was like, well, even the tag interface is really clumsy. And so I was just like, why not the comment? You know, I saw you do it. Exactly. But you guys do it better, because I saw yours, and yours were a comment followed by a pipe. And I had always kind of struggled with this idea of, like, how does anybody know whether something in nbdev is a comment or a directive?
So you made that explicit. And I kind of thought, I wasn't surprised, you know, because I thought, okay, JJ, I've always admired this guy's work, and now he's taken my work and made it better. Now I know it was intentional, but at the time I didn't know whether it was intentional or not.
And I thought, that's great; we should at least use that syntax. Yeah, sure. And then I started looking at what you're doing with it. And I thought, oh, this is, like, a whole toolset that does everything nbconvert does and a lot more.
But it's also more delightful to work with, because it's got much better documentation, it's got much better defaults; you know, the stuff that's built in for free is much better. And then when I spoke to you, I kind of said, like, you know, this feels like something I could build nbdev2 on; tell me a bit about the technical foundation, like, how is this working?
And you explained to me, and I started reading the source code to understand it, that it's actually this relatively thin wrapper around fantastic functionality that already exists in Pandoc. That's right. It's an orchestrator. Yeah, with a bunch of good defaults. So it's kind of like what fast.ai is to PyTorch, in a way.
Right. It is this amazing foundational technology that's actually just too hard for people to get their head around. Yeah. So let's give you, like you said, good defaults, good ergonomics, you know. And it's the same sort of thing. But also, Pandoc, I had so many problems with it. Like, you know, when I used it, it just very often didn't quite work, you know.
So you've also, like, just made sure it works. Like, oh, you know, that's unfortunate; okay, make sure that works. Yeah. So nbdev2 should basically look very, very similar to nbdev1, except for the pipes after the comments. But it's dramatically faster. Partly because I wrote a lot of stuff myself from scratch using Python's ast.parse.
So I'm working with the abstract syntax trees directly. I'm making sure I only parse once; I reuse the cached AST, you know. And then partly because, you know, we leverage Quarto, which is much faster than nbconvert. So it's much faster. And the code base, even though it does a lot more, is a lot smaller, you know, than nbdev1.
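In spirit, "parse once, reuse the cached AST" might look like the following. This is an illustrative sketch, not nbdev's actual code; the helper names are invented, and the caching here is just `functools.lru_cache` keyed on the source text:

```python
import ast
from functools import lru_cache

# Parse a piece of cell source once; repeated queries against the same
# source hit the cache instead of re-parsing.
@lru_cache(maxsize=None)
def parse(source):
    return ast.parse(source)

def function_names(source):
    "Names of the top-level functions defined in a piece of source."
    return [n.name for n in parse(source).body
            if isinstance(n, ast.FunctionDef)]

def has_docstring(source, name):
    "Does the named top-level function carry a docstring?"
    for n in parse(source).body:  # hits the cache on the second query
        if isinstance(n, ast.FunctionDef) and n.name == name:
            return ast.get_docstring(n) is not None
    return False

src = "def f():\n    'Say hi.'\n    return 'hi'\n"
function_names(src)      # parses once
has_docstring(src, "f")  # reuses the cached tree
```

Working on the syntax tree directly, rather than re-tokenizing or re-parsing text for every question you ask of a cell, is where the speedup comes from.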
Again, by kind of trying to build better foundations. Well, the interesting thing, I noticed the title of your blog post was Use Notebooks for Everything. And one thing that would be interesting to explore: so I kind of came up through this interactive computing metaphor, which was really defined by, have you heard of ESS, Emacs Speaks Statistics?
That was sort of this Emacs mode for S, actually, originally, and then R. And one of the things that it sort of said is, you want everything to be interactive and responsive, and you're always in a live session. The way they achieved that was, rather than having a notebook, they did line-by-line execution.
That's, like, the fundamental model: I select a line or a group of lines, and it can be smart syntactically, like, oh, I see the line continues. And you just edit lines, basically. And then at some point, you might, like you did, reorganize that into functions and so on and so forth.
And so one of my questions was, well, I think one of the most delightful and powerful things about notebooks for Python is that they give you this interactive development experience. That's sort of how I see it; and, you know, Smalltalk gives you an interactive development experience with yet another way of organizing the interactive development.
And so, you know, as we build tools now, we have this tradition from R of this ESS-derived, kind of, line-by-line execution. You see your side effects, maybe in another pane or in a console. And then we have notebooks, and we're sort of trying to do tooling for both.
And one of my questions is, how much of what's amazing about notebooks, like, so there are multiple ideas wrapped up in notebooks. There's everything in one place, there's bundling output, and then there's the interactive computing experience, and there's immediacy. Like, there's the thing that a lot of people hate, which is all the state.
And the state, right. And that's a side effect; it's all trade-offs, you know. And the state, you know, is actually, I think, part of what's excellent about notebooks, if you know how to leverage it.
Yeah, I mean, it's like your file system, you know, your home directory: that is state. When you cd into something and you copy something, that's state. And this is your home, you know; you made this box, you created a side effect, and it happens to be, you know, a model or a data set. It's like, this is what you've created; I have it now.
Yeah, you've created this environment to be in a state that you want it to be in. Yeah. And it's funny, because we have some religion, you know, in ours, of, like, well, you need to be able to execute the thing from top to bottom and have it work every time.
Sure. But then there are people who say to you, well, I don't really want to do that, because, actually, this was really expensive; you create this piece of state, and I don't so much want to have to run it top to bottom, you know. So, you know, I think people have tried to build ways to split the difference.
It's funny, when I first encountered these ideas, I was like, wow, it's so messed up that there's all this state. I was like, Mathematica must have some solution for this. I was at, like, some conference, and I walked up to them and said, how do you guys do this? Like, we don't; you just execute, you know. It's like, okay. Because it turns out, you know, if you want to solve that problem, it's its own quagmire.
And people have reactive notebooks that essentially do solve the problem, but then are really painful to work with interactively, because as soon as you're doing anything that takes more than 10 seconds, you're stuck.
Yeah, so, can I tell you a bit about my thoughts on that? Yes, I'd love that. So that sets the table of, like, all the stuff that's out there, and where do we go? Yeah. So a lot of people are very into line-by-line approaches in Python as well, particularly using the IPython REPL.
Yep. Yeah. And it looks basically identical to how people coded in APL 50 years ago, except they used a teletype, you know; it's based on that idea. And, you know, APL kind of invented that way of working. And APL was more than just a programming language, because your REPL was also how you would, like, text chat; there was an APL command for that, you know. Everything was there; that was your OS, if you like.
And there's nothing wrong with that. But, you know, there are other ways, right? And so with a notebook, you can do it top to bottom if you want to. But you don't necessarily want to, because it's often nice to go back and change something a little bit earlier, to answer the question, I wonder what happens if, right?
And so you change that, you select the four cells underneath, and you hit Shift-Enter to run those four cells. It's like, oh, well, what if I did this? And then you kind of think, okay, let's try three different versions of that. So you copy and paste those cells twice, and then you select them, and then you run them with the different versions, and then you compare. You're doing experiments, you know, and the artifacts of those experiments are right there, all in front of you.
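The "try a few versions, then compare" move can be sketched with a toy stand-in for a real experiment. The `train` function here is hypothetical, just gradient descent on the loss x², not anything from fastai; the point is how the copied-and-pasted cells collapse into one loop whose artifacts stay in front of you:

```python
# Toy stand-in for an expensive experiment: "train" a single parameter
# by gradient descent on the loss x**2, for a given learning rate.
def train(lr, steps=100):
    x = 5.0
    for _ in range(steps):
        x -= lr * 2 * x  # the gradient of x**2 is 2x
    return x ** 2  # final loss

# The three pasted cells, packaged up into one comprehension; the
# results dict is the artifact that sits there in the notebook.
results = {lr: train(lr) for lr in (0.01, 0.1, 0.5)}
for lr, loss in results.items():
    print(f"lr={lr}: final loss {loss:.2e}")
```

Once the comparison has taught you something, that dict (or a plot of it) is exactly the thing you'd keep in the notebook for the next reader.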
And that doesn't mean that you're then finished, right? Like, hopefully you've learned something with that, you're refining your understanding of the problem, right? So then you kind of package it up a little bit; you kind of say, okay, well, for somebody reading this notebook, I want them to see these three different versions.
And so, like, maybe you put it into a little for loop, or maybe you create some kind of function to display it and put it on a graph or whatever. But, you know, for me, there are two critical keyboard shortcuts in notebooks: Shift-M and Ctrl-Shift-hyphen. Shift-M merges two cells together, and Ctrl-Shift-hyphen splits them apart.
And so I'm always grabbing a single line of code, running it, exploring it, assigning it to something, fiddling with it. And after a while, I've got three lines of code. You know, almost all my functions are three to four lines of code, so I've got the three or four lines that do that thing.
And I just Shift-M a couple of times, indent the block underneath a def, add a docstring. And all those examples, they're all still there underneath. So I add some prose before each one. And that's a nice way of working. Yeah. And as you say, particularly in deep learning, sometimes I'll be like, okay, well, I want to show how we can interact with, like, a language model.
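To make the merge-and-wrap workflow concrete, here is a hypothetical sketch; the function and the numbers are invented for the example, not taken from the conversation. A few exploratory cells get merged with Shift-M, indented under a def, and given a docstring:

```python
# Hypothetical sketch (invented names and numbers) of the workflow
# described above: lines explored one cell at a time, then merged
# with Shift-M and indented under a def, with a docstring added.

# Cells during exploration, run and fiddled with individually:
#   vals = [2, 3, 5]
#   total = sum(v * v for v in vals)
#   total / len(vals)

# After merging the cells into a function:
def mean_square(vals):
    "Mean of the squares of `vals`."
    total = sum(v * v for v in vals)
    return total / len(vals)

# The original exploration stays underneath as a usage example:
print(mean_square([2, 3, 5]))
```

The point is that the exploratory calls survive as worked examples right below the definition, which is exactly what becomes the documentation in an nbdev notebook.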
All right, let's run this for 10 hours. You know, I come back in the morning and I've got a language model just where I want it. Yeah. I mean, maybe that's not a great example, because I'd probably serialize that as a pickle file or something. But yeah, you don't necessarily want to run everything all the time.
Yeah, an hour or 30 minutes would make the point just as well. I think there's an issue, which is, it reminds me of my time in spreadsheets. You know, I'm a huge fan of spreadsheets, even though a lot of people use them badly. Yeah. And I read a book 30-plus years ago, which was a book of spreadsheet style.
And it was designed to be like, you know, that English style book? The Elements of Style. But rather than the grammar and style of English, it was for spreadsheets. Yeah. And it explained, like, here's how you add careful auditing, error checking, self-documentation, whatever, to your spreadsheets.
And ever since then, you know, I've tried to follow those rules in my spreadsheets. Yeah. It's taking a very flexible tool and using that flexibility to create a process for using that tool which works really well. Same with notebooks. Yeah, you can shoot yourself in the foot with them, but that doesn't mean we should tell people not to use them.
Yeah, you should help people. You can shoot yourself in the foot with a .py file or sitting at the IPython REPL. Or a C++ file. Or a C++ file, definitely. So yeah, so we're kind of adding more and more stuff. So something that I've built as part of nbdev2 is something called execnb, which is just a tiny little Python module that just runs notebooks.
And, you know, you can parameterize the runs, and it'll save the results back into the notebook, with this idea that you can very quickly and easily run some experiments and share the results with people. And an nbdev repo, as I mentioned, gets continuous integration for free.
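To illustrate the core idea of a notebook runner like execnb, here is a minimal stdlib-only sketch. This is not execnb's actual API, just an illustration of what "running a notebook" means: execute the code cells top to bottom in one shared namespace.

```python
# Stdlib-only illustration (not execnb's actual API) of the core idea:
# read a notebook's JSON, execute its code cells top to bottom in one
# shared namespace, and return that namespace for inspection.
import json

def run_notebook(nb_json):
    "Execute every code cell of a notebook (given as JSON text) in order."
    nb = json.loads(nb_json)
    ns = {}  # shared namespace, playing the role of a kernel's globals
    for cell in nb["cells"]:
        if cell["cell_type"] == "code":
            exec("".join(cell["source"]), ns)
    return ns

# A tiny hand-built notebook with one markdown cell and one code cell:
nb = json.dumps({"cells": [
    {"cell_type": "markdown", "source": ["# An experiment"]},
    {"cell_type": "code", "source": ["x = 21\n", "y = x * 2\n"]},
]})
print(run_notebook(nb)["y"])  # prints 42
```

A real runner like execnb additionally captures outputs, handles rich display data, and can write results back into the .ipynb file, but the top-to-bottom execution model is the same.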
That continuous integration runs every notebook top to bottom. So if your notebooks don't work top to bottom, as soon as you commit, you're going to find out. You're going to get a nastygram from GitHub. So it's harmless to create a local out-of-order notebook, because it's going to get checked.
Yeah. So, I mean, yes, you've deluded yourself temporarily, but there's a net. Yeah, exactly. That makes sense. Yeah. All right. So if I can come back to Quarto a bit, JJ, I wanted to understand where you're going with it and why. So you mentioned earlier that scientific programming is, broadly speaking, something you were trying to improve.
But Quarto is not just scientific programming. You've got all this stuff about scientific publishing as well. Yeah. So what are you trying to do with Quarto, and why are you trying to do it? Well, I would say RStudio is much more about scientific computing; that's what RStudio and the Tidyverse and, you know, Arrow and all those projects are about, scientific computing.
I'd say that Quarto is very squarely about scientific communication. And I would say that there are a few things that, just by working in the field for a little while, I have noted that I think warrant significant improvement. So one is the fact that scientific communication, for a lot of good reasons, is very tied to print.
And the coin of the realm is these print articles. And that's fine. There are good reasons for that. And there may even still be good reasons for that in the age of the web, where, for example, a PDF is a more durable entity than, you know, a website that might get taken down or have its links break, etc.
But maybe, maybe not. Okay. So what I'm saying is, I've certainly seen some discussions where people say it's not a terrible thing to have a self-contained representation of your work, or that it's better to have, like, a Docker image that can run everything anyway. But so, very tied to print. And so one of the things is to help scientific communication take better advantage of the web, while still not losing the focus on print.
So not going completely, like, hey, everything now and in the future is web, because then all of a sudden I actually can't write an article that I can publish in a journal with that mindset. So that's one piece. Another piece, which was a huge focus of the R community, is reproducibility.
And this idea that everything should be in a .R file or an R Markdown document that runs top to bottom, where your figures and your tables and your results and everything are all reproducible and produced by code. And so helping people do that is a big motivator.
So let me come back to the first one, which is about scientific communication, making it more web-friendly. Yeah, I guess, like, why? What's this got to do with RStudio? Or what's this got to do with you? Like, why do you care?
Well, to me, my own kind of beginning of the renaissance was the Bill James Baseball Abstract; it opened my eyes. And then I got into politics, and my mentors demonstrated, like, wow, we're making decisions that affect hundreds of millions of people with no evidence, or making medical decisions with little or no evidence. Probably an exaggeration, but with really weak, under-rigorously prepared and under-evaluated evidence.
And so to me, it's just that doing science well has a lot of consequences. So this is a mission for you, to do science better. That's right. And John Chambers, in his book about R and S software for data analysis.
He actually has this concept in there, which I used in all my slides, called the prime directive, which is basically that accurate, trustworthy computing of scientific results is the prime directive. It's really important for the same reasons: for social policy, for medicine, for, you know, just safety. So that's it.
I mean, I was really compelled by that. So helping people do science really well, and communicate technical content and persuasion well, is to me very, very compelling. Is there something about accessibility there as well for you? Like making science more accessible, and making scientific publications more accessible?
Not per se. I'm taking scientific communication at face value, that it serves whatever purposes it serves and has whatever virtues it has. I'm not saying let's change that; that's not, at least, my thing. But I will say that another related influence was, you've probably read it, Tufte has this pamphlet, The Cognitive Style of PowerPoint: Pitching Out Corrupts Within, and he sort of breaks down what's wrong with a lot of the way we communicate technical information.
And at the end, he says, you know, really what we should be doing is giving each other handouts that have analysis and evidence and data. We should be reading the handouts before the meeting, and then we should be talking about them, not pitching bullets at each other.
So I was compelled by that too. I was very compelled by the idea: let's give people tools to communicate effectively about technical matters and science. So that's very motivating to me. So just showing this, this is the Tufte pamphlet. It's really great. Yeah, there's a really funny thing in there where he shows what the Gettysburg Address would be as a PowerPoint presentation.
So, you know, there are similar ideas in how Amazon do things; they do a six-page kind of memo. And of course, also Feynman, you know, talking about the Challenger space shuttle disaster, felt like a lot of that problem came from how complex ideas were communicated.
And I think you just posted on your blog about this, the evidence update regarding masks and COVID-19. That's exactly what I'm talking about. Like, let's have a dialogue about a matter of public health importance, use evidence, and do technical communication really effectively.
The reason I asked about accessibility is that, I mean, this was an article that this team and I wrote in early April 2020. So, you know, within a month, really, of the pandemic taking hold in the US. Well within a month.
But it wasn't published; I mean, it says here accepted December the fifth, and I think it was actually published quite a bit later than that. So, by the time this was available in the Proceedings of the National Academy of Sciences, it was almost obsolete, you know. But what we did do was also put it on preprints.org, where it was there from, here we go, the 10th of April.
And these were very minor changes, right? And this version has received 439,000 views of the abstract and 98,000 downloads, which is by far the most viewed preprints.org paper of all time. And, you know, the fact that that was much more, if we compare it, like...
Let me re-answer the question, because when you say accessibility, I read that as the accessibility of the discourse: can a layperson understand this? And that's not per se a goal. But accessibility in the sense of the way scientific publishing works, and the delays that are inherent in the progressive refinement of knowledge, and the various choke points there are for publishing that gives people credit for their careers.
That is all pretty messy. I don't personally have good ideas about how to resolve that, but a lot of people do, and a lot of people are working hard at it. And so it is motivating to me that if I could build a tool that's widely adopted for scientific communication, I could marry that to the good ideas that are out there and make them easier to adopt.
I mean, that's kind of why I asked, because for me that's kind of the number one goal, even though you got to it third. That's a hope that I have. Because, to be clear, I hate thinking about, talking about, writing about, or learning about masks; I find them tedious and annoying. But, you know, I have to, because other people aren't.
And so, you know, I updated that paper quite recently. But I didn't put it in a journal; I put it on our website, because I felt this was more accessible. And also because, like, I just couldn't be bothered doing all that LaTeX stuff. And it has real links; anybody can click on one and go there.
This is a goal we have, which isn't evident yet, we're working on it: you should be able to basically create a blog like this that's got this content, and then take that same content, repurpose it, and send it to the journal. Exactly. That's exactly what I want to do.
It's like single-source publishing, where you can almost be web-first. And then, oh look, we also know how to make LaTeX that you can submit to the other places where you need to get this published. And I can show you how horrible this looks nowadays.
So I did exactly that for a paper about vaccine safety with my friend Uri Manor. So Uri was a senior author on a study which, for whatever reason, got picked up by the conspiracy theorists as showing that vaccines are harmful. And so he and I got together to write a paper that basically said, here's what that paper actually says.
So this is the paper here, in LaTeX, 2021. But again, we actually wrote this probably in about April 2021, and in the end nobody had yet reviewed our submission. And so in October, I just kind of went, oh, fuck it, I'm just putting it on the web.
So I had to take that LaTeX document and turn it into web pages, and I did use pandoc to help me. But as you can see, we end up with these kind of references that I had to paste down at the bottom. And then, let me show you, go to the GitHub, this will be out by the time this video broadcasts, there's a GitHub org called quarto-journals.
So basically, we're working on journals. So you can see, like, if you go to one of those, let's see what it shows you. Yeah, so scroll down. Anyway, it's not showing you, but go to that template, that .qmd file there.
And you'll see sort of an example of, you know, your metadata, your authors, all the stuff you need. It's making the LaTeX that the journal wants, including getting all the fiddly bits right. But then the same exact content is going to render perfectly in HTML.
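For readers who haven't seen one, the front matter of a Quarto article looks roughly like this; the names and field values below are illustrative, and each journal template documents its own required metadata:

```yaml
---
title: "A Hypothetical Article"
author:
  - name: Jane Doe
    affiliations:
      - name: Example University
format:
  html: default
  pdf: default
bibliography: references.bib
---
```

The body below the front matter is the same markdown and executable code cells whether it renders to the journal's LaTeX or to HTML.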
That's great. It's going to do everything right. So I think the idea is, let's just write in Quarto, and now we're going to be able to put it on the web, maybe web-only, you know, but also into that world of publishing. My god, I was so shocked when I discovered how it works for this PNAS thing.
And PNAS is, I think, like the third highest impact journal in the world. And so, you know, I thought, oh, this is going to be a smooth, professional experience. And, you know, I did the whole thing in Overleaf with LaTeX and BibTeX, and it was pretty easy.
And thanks to Overleaf, all 19 of us authors could collaborate by working on different sections. And so then, when it came to publishing it, I had to upload the rendered PDF. Okay, so I uploaded the rendered PDF; I wasn't quite sure how that was going to help them.
And then, a while later, they contacted me and said, okay, we now need you to look at these questions. Basically, they put annotations in the PDF, which is already kind of hard to work with, so I ended up trying to reply in the PDF. And then eventually they're like, okay, now you have to go through and look at the camera-ready document and check these things.
And they sent me back a Word document. They'd taken the whole thing and redone it in Word. Yeah, just wait for it. So then they had a question about a reference. They're like, maybe this reference doesn't really make sense there; I think they said you're not allowed to use it because it violates some rule or something.
I was like, I don't want to fight about this; fine, you can get rid of it. And they're like, okay, so what you need to do is remove that, then renumber all the references afterwards. Exactly. There's 150 references. Well, this is what a proper, you know, scientific markdown system will do for you.
It'll renumber everything. So I just said no. I made that change in the LaTeX; here's the PDF with the correction. Yeah, you fix it. Well, I almost view it as, you have to give people tools that help them with the problems they have now.
Which is: I need to interact with all these journals and publishing systems. And then you have a chance to help them evolve what they do, and help them do things they never thought were possible. So I think that's one of the reasons we are focused on really tooling LaTeX well. We're very focused on that, even though we think, wow, it would be great if we didn't have LaTeX. We're not ignoring it; we're saying, okay, we'll tool that, but we'll also tool the web.
And it'll be great, and we'll all get there eventually. So now, can I ask some... I'm very, very excited about this, by the way. This is something I'm passionate about, in a kind of slightly weird way, in that I'm passionately anti how academia works, to the level that everybody assumed I would go into academia after school, and I refused, on the basis that I didn't like how academia worked.
Yeah. And I've now finally come full circle; I am actually a professor. But, you know, only because I'm able to do it on my terms, and I totally refuse to do any of the normal things. So it's great to have you involved in this fight. Yes, we are going to be very involved.
We have a couple of questions from the community. So, okay, this one is actually asked of me, but I wouldn't mind asking it of you as well, and then I can come back to myself. This person said: for Jeremy, your productivity amazes many people, including myself. Do you have any tips that might be valid in general?
What does your usual day look like? Now, I feel the same way about you, JJ; I'm amazed at what you've done and what you do. And Hamel and I are both, you know, like, well, how does JJ do all these things so quickly? So, yeah, I'd love to hear from you. Well, I would say that, to me, the main lever for productivity is not how fast you can code, though that certainly helps.
I think it's more: what problems do I choose to solve, in what order, and at what level of depth? You know, to me, getting through a problem, or a problem domain, is about making those choices. And there are side quests you can go on that waste three times the total effort required to actually solve the problem.
So I think a lot of that just comes from experience. So there's choosing what problems to work on, and I think you can level yourself up there by talking over what you're planning to do with other people. So, I was thinking of trying to solve this and then this, and then they say, huh, why is that important? Isn't that only important to this? And couldn't you do, you know... So I think some dialogue helps. Inner dialogue is great; if you have a lot of experience, maybe you can get it done with mostly inner dialogue, but talking to people helps. Then, tactically, I think there's just throughput: how much code, how many features can you write?
And to me, the biggest thing there is just, you know, several hours of completely distraction-free time. So you turn off any notifications, you've got to get a proper head of steam, and not let yourself be distracted.
Do you work at an office, or do you work from home? I do work at an office, yeah, and I found that to be helpful for that purpose. I do have a good setup for working at home too, and it's separate enough from the rest of the house that I can approximate that pretty well at home.
But yeah, so I feel like I need to get a four, five, six-hour chunk of distraction-free time. So then it helps just to batch things up, and you can even batch up things by the day. Monday, I'm going to do all the fiddly bits and distractions and calls, or Monday and Tuesday I'll do that.
And then I know Wednesday through Friday I have nothing scheduled at all, and I can get to good focus. I mean, that is significantly more hours than most experts in creative fields can achieve. Normally, four hours seems to be considered about the best you can aim for; five or six is fantastic.
Yeah. Is that because of just sustaining concentration? Yeah, it's like in the kind of deliberate practice stuff. Yeah. It's kind of what you're doing as well? No, no, that's just a helpful genetic attribute that I have.
Yeah. I can't do that. You know, I very rarely could do four hours; three is good for me. You try to get the three hours distraction-free? Yeah. I mean, also, my main thing by far is a deliberate choice I made as an 18-year-old to spend, on average, half of every day learning or practicing something new.
Yeah. Which drives everybody I work with crazy, pretty much, but it makes you very creative and inventive, able to see around a lot of corners and solve problems in ways that other people wouldn't. And I know tools extremely well: all the keyboard shortcuts and all the tricks and all the libraries.
But it does mean, yeah, people working with me are like, okay, we're going to have this thing finished by Friday, and you're, you know, learning this programming language for no obvious reason; they need to take the long view. And it also means, very often, using a tool I'm not very familiar with to do something, even though it would be five times faster to do it manually.
Yeah. But it's definitely got me to a point now where, compared with nearly everybody I work with, I just get things done, often 10 times faster, and it tends to work the first time. And I often find, when I do live coding or whatever, people are like, oh, I didn't know that tool exists.
I didn't realize you're always looking for that kind of efficiency. And so I think something people would be surprised about with me, if people think of me as productive, is how few hours of productive time I have a day. I spend a lot of time hanging out with my daughter, going for a walk on the beach, eating ice cream, you know, trying to be in a good mindset to have a good three hours.
But it's a very, very good three hours. Yeah. Not many people have a good three hours that often. That's right. I see that a lot of the people I work with have their days divided up into small bits, and there's probably not even three hours of engineering in there, and they're all broken up.
And it's also a case of being good at saying no. Like, I very rarely do meetings, and if I do, I want it to be a good one. Like this, you know, talking to somebody I really want to talk to about things I really care about.
And so generally, if somebody's like, can I get on your schedule for a half-hour phone call, I'll say no. But, you know, if you send me an email, I will respond. Yeah. Okay. So, your brother apparently does some rapping. Somebody wants to see you doing some rapping, JJ.
That's not going to happen. Okay. So, for both of you: when making design or development decisions regarding nbdev2 and Quarto, were there any trade-offs you struggled with? Yeah, I would say two trade-offs. One goes back to the discussion we had earlier about leaky abstractions: how leaky an abstraction over pandoc should be. R Markdown actually pretty fully abstracted pandoc.
You just used all these R functions, and you didn't even know pandoc was there. And for any given piece of functionality you needed in pandoc, you needed some hacky way to work around the fact that we'd written this wrapper. And so for Quarto, I went with a leakier abstraction, which basically says everything that's in pandoc is there, passed through.
Partly that's because pandoc had evolved. It used to be that it could only accept a lot of things via command-line parameters, and now it can take everything through YAML. So it became a system you could interact with reasonably without a special wrapper. And I felt that if we decided to wrap it, it was going to be kind of a losing game, trying to keep up with everything people were trying to do; by making it leaky, we could ride on everybody's existing knowledge of pandoc and all the things that are in it.
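As a small illustration of that pass-through design, a document's YAML header can mix Quarto's own settings with ordinary pandoc options; the specific keys below are just common examples, not a complete or authoritative list:

```yaml
format:
  html:
    toc: true
    number-sections: true
    csl: nature.csl
```

These are standard pandoc-backed options that Quarto forwards rather than re-wrapping in its own API, which is what lets existing pandoc knowledge carry over.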
So that was one. And the other one, which we didn't really decide on until about a year into the project, was how much we should be batteries-included versus how much we should be extension- and plugin-driven. Extension- and plugin-driven can be very dynamic, you know; the JavaScript ecosystem just keeps evolving every three months.
On the other hand, it's really hard for people to get their bearings, and things get fragmented. And so we went with batteries-included, because we felt it was actually a somewhat bounded problem. We looked at a bunch of systems and said, it's a known feature set.
And the users are not JavaScript engineers; they're analysts and scientists, and they will appreciate batteries. I would say, as a user, I've definitely appreciated that. I don't want to spend my time figuring out how to add a JavaScript-based syntax highlighter and a JavaScript-based table of contents, or exactly how to modify the CSS to create a collapsible sidebar.
I mean, nobody wants to do that, and everybody needs all those things. So you give it to me, but you do a good job of making sure I can replace it if I want to. And there are plenty of things that I've wanted to replace.
And, you know, very kindly, one of the first things you did for us was add the ipynb filter directive, where we now have a Python script that takes in a notebook and feeds back out a modified notebook. And by using that, plus the Lua filters on the AST, we can totally do anything we like.
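A minimal sketch of that kind of filter: a Quarto-style ipynb filter receives the notebook as JSON on stdin and writes the (possibly modified) notebook JSON to stdout. The transformation shown, dropping cells tagged "hide", is invented for illustration; it is not what nbdev's actual filter does.

```python
# Minimal sketch of an ipynb-style notebook filter: notebook JSON in,
# modified notebook JSON out. The "hide" tag is a hypothetical example.
import json
import sys

def strip_hidden(nb):
    "Remove cells whose metadata tags include 'hide'."
    nb["cells"] = [c for c in nb["cells"]
                   if "hide" not in c.get("metadata", {}).get("tags", [])]
    return nb

# To run as an actual filter, pipe the notebook through:
#   json.dump(strip_hidden(json.load(sys.stdin)), sys.stdout)
```

Because the filter sees the whole notebook structure, any cell-level rewrite (stripping directives, injecting cells, rewriting sources) fits this same in/out shape.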
Yeah, there's nothing we can't do. Yeah, and we just recently introduced Quarto extensions, which are basically Lua; they're installable, and they're kind of easy to bundle. So that's nice. Yeah, Hamel and I were talking about that this morning. All right, so my answer to this question: I think the main trade-off is actually not just about nbdev but kind of everything we do, which is that in Python there's a schism between treating it as a kind of static language that you write a bit like Java, versus a highly dynamic language that you write a bit like Lisp.
And in my opinion, Jupyter is best for the latter, and in general I like writing code using the latter approach. I like exploratory code where I'm manipulating objects and taking advantage of metaprogramming and dynamic features. The Python community has leaned very heavily towards the former: static typing, more enterprisey approaches to testing and documentation, and lots of single-use tools with their own concepts to learn.
So that's the big trade-off we've made: to basically opt out of the usual way of doing things in Python, to the extent that we're starting to think, should we describe this as a different dialect of Python? Because it's not particularly recognizable. And you wouldn't want people to expect, oh, I can just pour in all the stuff that I'm already using.
Well, I mean, it interacts with it all fine, but you write it in a different way. So if you're used to using VS Code and relying very heavily on static type annotations, you're not going to love our libraries, because they're so dynamic that VS Code doesn't generally know what the hell is going on.
It just kind of gets confused. Whereas Jupyter always knows exactly what's going on, because it can do real-time introspection of the symbols. And, you know, there's this thing about the Python community: there's this kind of basic principle that comes from Guido, the original developer of Python, which is that ideally there should be only one way to do it.
And I don't understand how this ever became a thing, because as soon as you say that, you basically turn off innovation: if you want to do something better, you're not allowed to, because you've just created a second way to do it. And so the Python community, or at least this kind of core group, is often quite anti fast.ai stuff, because we're a second way to do it, for all values of "it".
You know, we have a different way of testing, a different way of building libraries, a different way of doing types; we even have a Julia-inspired type dispatch system. We do a lot of stuff inspired by non-Python languages.
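To give a flavor of what type dispatch means here, a rough standard-library approximation: Python's functools.singledispatch picks an implementation based on the type of the first argument, while fastcore's typedispatch generalizes this, Julia-style, to dispatch on multiple arguments. The `show` function below is invented for the example.

```python
# Rough stdlib approximation of type dispatch: pick an implementation
# based on the argument's type. (fastcore's typedispatch generalizes
# this to multiple arguments, Julia-style.)
from functools import singledispatch

@singledispatch
def show(x):
    "Fallback for types with no registered implementation."
    return f"object: {x!r}"

@show.register
def _(x: int):
    return f"int: {x}"

@show.register
def _(x: list):
    return f"list of {len(x)} items"

print(show(3))       # prints: int: 3
print(show([1, 2]))  # prints: list of 2 items
```

Adding behavior for a new type means registering one more small function, rather than editing a central if/elif chain.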
And, you know, I think that's really problematic, whereas R seems a much more flourishing, welcoming, and diverse community than the Python community in that way. There's a lot of variance in how people do things, and it's generally accepted.
I would say, yeah, there's a lot of stuff where people are always finding new ways to use the language's dynamic features to express things differently. So yeah. You know, there are a lot of things I don't love about R, and a lot of things I do love about R. Like you, I came out of the SAS, SPSS, Excel world; we used S-PLUS, you know, back before R was really a thing, in a previous startup.
That was my world for so many years, and I wouldn't go back to it. I like the Python language more, but there's a lot of stuff I wish I could have: everything that Hadley has written, the community, the documentation, the formula language.
All right, we've got one more each, if that's okay. Okay, sure. All right, JJ: nbdev2 is built on top of Quarto. Do you have any thoughts on other stuff that might be interesting to build on top of Quarto? I think a couple of classes of thing, and nbdev2 is an exemplar of one, which I think of as generation of web content from software artifacts.
So I have a software artifact; in this case, I have my notebook that defines a bunch of functions and exports things, and I can generate a website from that. Or think of, you know, TensorBoard, which has real-time elements, so it's not a perfect analog, but there are these artifacts created in a directory, and then it creates this web experience from them.
And the Bioconductor project had this thing, pre R Markdown: they have these S4 objects that were very complicated, which could have gene sequences in them and all kinds of stuff. And you just literally call a function, pass the object, and it makes a website from it. So I think this idea of having different types of software artifacts, and then just creating websites from them, is really interesting.
And obviously, documentation for a software package is one variant of that, but there are others. And the other class is, you know, we sort of promote, hey, look, you can make a website, you can make a book, but you can feed just about any publishing pipeline from notebooks through Quarto.
So if you've got a big Hugo website, you can pump Markdown into that, or if you're using Confluence and you need to put all your articles there, you can pump things in there. So start building these publishing pipelines downstream of Quarto to these other systems, because it's great that you can easily make a website, but oftentimes you need to get your content somewhere else.
And so hopefully we can teach people how to do this; it's all possible. I remember you guys asked about Docusaurus, and I was like, oh, here's an example, you can totally feed a Docusaurus site with Quarto. I know how to do it; I've got to teach other people how to do it.
Yeah, great. And so my last question was: Jeremy, nbdev made literate programming in Jupyter feasible, and nbdev2 improves upon that even further. What are some open research or exploration areas that could help improve literate programming even further in the future? That was one of my questions, too.
So that's good. All right, I'm just going to totally hand that over to somebody much smarter than me, who's thought about it for far longer than me, which is Bret Victor. Bret Victor has a talk from 2013 called The Future of Programming, and he talks about this idea of coding as direct manipulation of data.
And so to me, it's not so much about literate programming as about exploratory programming. Bret has given so many great examples of directly manipulating things to code. He actually shows examples from the '60s, like Sketchpad, where Ivan Sutherland was directly drawing things on a display, believe it or not, to create constraints or to create automatic drawings.
Or from 1969, a Prolog-style, pattern-matching approach to describing what you want; Doug Engelbart's ideas from 1968, again all about manipulating things on screen directly; the RAND Corporation's GRAIL, building things up in this way. And of course, we've talked about Smalltalk. It's all interactive and responsive.
And people like Bret and Alan Kay talk about how we've somehow lost our ability to write things in environments that are more like this. There's a classic example from Bret Victor where he's designing a computer game, a Super Mario-style computer game.
And he sets up this kind of time-travel-debugging system that shows you exactly what would happen if somebody pressed the buttons you just pressed in your game, and shows you where the characters would all end up. And he modifies things in real time, and you see them moving.
Yeah, this is what it should feel like to work with code: it should feel like this artisanal, real thing. We're pretty far from that. It's funny, notebooks are great, and data science REPLs are great, but they're probably like 15% of the way to where they need to be. I'm pretty excited about working on those problems too.
Bret also had a great example: he had this award-winning iOS app for the train schedule, the BART schedule in San Francisco, and in one talk he describes how you could have written the whole app entirely using a kind of graphical object system that's totally unlike any coding I've ever seen.
Yeah. Well, thank you, JJ. I appreciated our two-way AMA slash conversation. I think I did just reinvent the idea of a conversation. You'll have to decide whether you're going to promote it as a two-way AMA or a conversation. All right.
Well, good luck with the last couple of weeks up to the launch. Yeah, absolutely, thanks. And we're going to be launching right around the same time. Exactly the same time. It'll be fun. All right, mate, take care. Bye, everybody. Thank you.