
Luís and João Batalha: Fermat's Library and the Art of Studying Papers | Lex Fridman Podcast #209


Chapters

0:00 Introduction
2:22 Backstories to research papers
17:13 Fermat's Library
37:14 Scientific publishing
1:00:54 How to read a paper
1:06:48 Taking good notes
1:15:27 Favorite papers on Fermat's Library
1:56:18 Fermat's Library on Twitter
2:05:50 What it takes to build a successful startup
2:14:46 Game of Thrones
2:17:34 Realism in science fiction movies
2:23:33 Greatest soccer player of all time
2:46:22 Advice for young people

Transcript

The following is a conversation with Luís and João Batalha, brothers and co-founders of Fermat's Library, which is an incredible platform for annotating papers. As they write on the Fermat's Library website, "Just as Pierre de Fermat scribbled his famous last theorem in the margins, professional scientists, academics, and citizen scientists can annotate equations, figures, ideas, and write in the margins." Fermat's Library is also a really good Twitter account to follow.

I highly recommend it. They post little visual factoids and explorations that reveal the beauty of mathematics. I love it. Quick mention of our sponsors. Skiff, SimpliSafe, Indeed, NetSuite, and 4Sigmatic. Check them out in the description to support this podcast. As a side note, let me say a few words about the dissemination of scientific ideas.

I believe that all scientific articles should be freely accessible to the public. They currently are not. In one analysis I saw, more than 70% of published research articles are behind a paywall. In case you don't know, the funders of the research, whether that's government or industry, aren't the ones putting up the paywall.

The journals are the ones putting up the paywall, while using unpaid labor from researchers for the peer review process. Where is all that money from the paywall going? In this digital age, the costs here should be minimal. This cost can easily be covered through donation, advertisement, or public funding of science.

The benefit versus the cost of all papers being free to read is obvious, and the fact that they're not free goes against everything science should stand for, which is the free dissemination of ideas that educate and inspire. Science cannot be a gated institution. The more people can freely learn and collaborate on ideas, the more problems we can solve in the world together, and the faster we can drive old ideas out and bring new, better ideas in.

Science is beautiful and powerful, and its dissemination in this digital age should be free. This is the Lex Fridman Podcast, and here's my conversation with Luís and João Batalha. Luís, you suggested an interesting idea. Imagine if most papers had a backstory section, the same way that they have an abstract.

So, knowing more about how the authors ended up working on a paper can be extremely insightful. And then you went on to give a backstory for the Feynman QED paper. This is all in a tweet, by the way. We're doing tweet analysis today. How much of the human backstory do you think is important in understanding the idea itself that's presented in the paper or in general?

- I think this gives way more context to the work of scientists. I think a lot of people have this almost kind of romantic misconception that the way a lot of scientists work is almost as the sum of eureka moments where all of a sudden they sit down and start writing two papers in a row, and the papers are usually isolated.

And when you actually look at it, the papers are chapters of a way more complex story. And the Feynman QED paper is a good example. So, Feynman was actually going through a pretty dark phase before writing that paper. He had lost enthusiasm for physics and doing physics problems. And there was one time when he was in the cafeteria at Cornell, and he saw a guy throwing plates in the air.

And he noticed that when the plate was in the air, there were two movements there. The plate was wobbling, but he also noticed that the Cornell symbol was rotating. And he was able to figure out the equations of motion of those plates. And that led him to think a little bit about electron orbits in relativity, which eventually led to quantum electrodynamics.

So, that kind of reignited his interest in physics, and he ended up publishing the paper that led to his Nobel Prize, basically. And I think there are a lot of really interesting backstories about papers that readers never get to know. For instance, a couple of months ago we did an AMA around a paper, a pretty famous paper, the GANs paper, with Ian Goodfellow.

And so, we did an AMA where everyone could ask questions about the paper, and Ian was responding to those questions. He was also telling the story of how he got the idea for that paper in a bar. So, that was also an interesting backstory. I also read a book by Cedric Villani.

Cedric Villani is this mathematician, a Fields Medalist. And in his book, he tries to explain how he got from a PhD student to the Fields Medal, and he tries to be as descriptive as possible, every single step, how he got to the Fields Medal. And it's interesting also to see just the amount of random interactions and discussions with other researchers, sometimes over coffee, and how it led to fundamental breakthroughs in some of his most important papers.

So, I think it's super interesting to have that context of the backstory. - Well, the Ian Goodfellow story is kind of interesting, and perhaps that's true for Feynman as well. I don't know if it's romanticizing the thing, but it seems like just a few little insights and a little bit of work does most of the leap required.

Do you have a sense that for a lot of the stuff you've looked at, just looking back through history, it wasn't necessarily the grind of, like, Andrew Wiles on Fermat's Last Theorem, for example. It was more like a brilliant moment of insight. In fact, Ian Goodfellow has a kind of sadness to him almost, in that at that time in machine learning, especially for GANs, you could code something up really quickly on a single machine and almost do the invention, go from idea to experimental validation in a single night; a single person could do it.

And now there's kind of a sadness that a lot of the breakthroughs you might have in machine learning kind of require large scale experiments. So it was almost like the early days. So I wonder how many low hanging fruit there are in science and mathematics and even engineering where it's like, you could do that little experiment quickly.

Like you have an insight in a bar. Why is it always a bar? But you have an insight at a bar and then just implement and the world changes. It's a good point. I think it also depends a lot on the maturity of the field. When you look at a field like mathematics, like it's a pretty mature field.

A field like machine learning is growing pretty fast. And it's actually pretty interesting. I looked up the number of new papers on arXiv with the keyword machine learning, and like 50% of those papers have been published in the last 12 months. So you can see just the sense- Five zero?

Five zero, 50%. So you can see the magnitude of growth in that field. And so I think like as fields mature, like those types of moments, I think naturally are less frequent. It's just a consequence of that. The other point that is interesting about the backstory is that it can really make it more memorable in a way.

And by making it more memorable, it kind of sediments the knowledge more in your mind. I remember also reading the sort of the backstory to Dijkstra's shortest path algorithm, right? Where he came up with it essentially while he was sitting down at a coffee shop in Amsterdam. And he came up with that algorithm over 20 minutes.

And one interesting aspect is that he didn't have any pen or paper at the time. And so he had to do it all in his mind. And there's only so much complexity that you can handle if you're just thinking about it in your mind. And when you think about the simplicity of Dijkstra's shortest path algorithm, knowing that backstory helps sediment that algorithm in your mind so that you don't forget about it as easily.
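
For readers who want the algorithm itself and not just the backstory, here is a minimal sketch of Dijkstra's shortest-path idea in Python, using a priority queue. The toy graph and its weights are invented purely for illustration, and Dijkstra's original 20-minute version was essentially the simpler array-based variant of the same idea.

```python
# A minimal sketch of Dijkstra's shortest-path algorithm using a priority queue.
# The graph below and its edge weights are made up purely for the example.
import heapq

def dijkstra(graph, source):
    """graph: dict mapping node -> list of (neighbor, edge_weight) pairs."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]                      # (best known distance, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:                    # stale queue entry, skip it
            continue
        for neighbor, weight in graph[node]:
            candidate = d + weight
            if candidate < dist[neighbor]:    # found a shorter path to neighbor
                dist[neighbor] = candidate
                heapq.heappush(heap, (candidate, neighbor))
    return dist

# Toy example: shortest distances from node "a".
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))                   # {'a': 0, 'b': 1, 'c': 3}
```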

It might be from you that I saw a meme about Dijkstra. It's like he's trying to solve it and he comes up with some kind of random path. And then it's like, my parents aren't home. And then he figures out the algorithm for the shortest path. I'm trying to convey a meme through words, but it's hilarious.

I don't know if it's in post that we construct stories that romanticize it. Apparently with Newton, there was no apple. Especially when you're working on problems that have a physical manifestation or a visual manifestation, it feels like the world could be an inspiration to you. So it doesn't have to be completely on paper.

Like you could be sitting at a bar and all of a sudden see something in a pattern will spark another pattern and you can visualize it and rethink a problem in a particular way. Of course, you can also load the math that you have on paper and always carry that with you.

So when you show up to the bar, some little inspiration could be the thing that changes it. Are there any other people, almost on the human side, whether it's physics with Feynman, Dirac, Einstein, or computer science, Turing, anybody else? Any backstories that you remember that jump out? Because I'm also referring to not necessarily these stories where something magical happens, but these are personalities.

They have big egos. Some of them are super friendly. Some of them are self-obsessed. Some of them have anger issues. Some of them, how do I describe Feynman? But he appears to have an appreciation of the beautiful in all its forms. He has a wit and a cleverness and a humor about him.

So does that come into play in terms of the construction of the science? - I think you brought up Newton. Newton is also a good example to think about in terms of his backstory, because there's a certain backstory of Newton that people always talk about, but then there's a whole other aspect of him that is also a big part of the person that he was.

But he was really into alchemy. And he spent a lot of time thinking about that and writing about it. And he took it very seriously. He was really into Bible interpretation and trying to predict things based on the Bible. And so there's a whole other backstory there. And of course, you need to look at it in the context of the time when Newton lived, but it adds to his personality.

And it's important to also understand those aspects that maybe people are not as proud to teach to little kids, but it's important. It was part of who he was and maybe without those, who knows what he would have done otherwise. - Well, the cool thing about alchemy, I don't know how it was viewed at the time, but it almost like to me symbolizes dreaming of the impossible.

Like most of the breakthrough ideas kind of seem impossible until they're actually done. It's like achieving human flight. It's not completely obvious to me that alchemy is impossible, or at least putting myself in the mindset of the time. And perhaps even still, some of the most incredible breakthroughs would seem impossible.

And I wonder about the value of believing in, almost like focusing on and dreaming of, the impossible, such that it actually is possible in your mind, and that in itself manifests, whether that's accomplishing that goal or making progress in some unexpected direction. So alchemy almost symbolizes that for me. - I distinctly remember having the same thought when I learned about atoms and that they have protons and electrons. I was like, okay, to make gold, you just take whatever has an atomic weight below it and then shove another proton in there and then you have a bunch of gold.

So like, why don't people do that? Conceptually it seemed like, this sounds feasible. You might be able to do it. - And you can actually, it's just very, very expensive. - Yeah, yeah, exactly. So in a sense, we do have alchemy, and maybe even back then it wasn't as crazy that he was so into it, but people just don't like to talk about that as much.

Yeah, but Newton in general was a very interesting fella. - Anybody else come to mind? In terms of people that inspire you, in terms of people that you're just happy once existed or still exist on this earth. - I think, I mean, Freeman Dyson for me.

Yeah, Freeman Dyson was, I've had a chance to actually exchange a couple of emails with him. He was probably one of the most humble scientists that I've ever met, and that had a big impact on me. We were actually trying to convince him to annotate a paper on Fermat's Library, and I sent him an email asking him if he could annotate a paper, and his response was something like, I have very limited knowledge.

I just know a couple of things about certain fields. I'm not sure if I'm qualified to do that. That was his first response. And this was someone that should have won a Nobel Prize and worked in a bunch of different fields, did some really, really great work. And then, just in the interactions that I had with him, every time I asked him a couple of questions about his papers, he always responded saying, I'm not here to answer your questions.

I just want to open more questions. And so that had a big impact on me. It was just an example of an extremely humble yet accomplished scientist. And Feynman was also a big, big inspiration in the sense that he was able to be, again, extremely talented as a scientist, but at the same time he was also really smart from a social perspective and he was able to interact with people.

He was also a really good teacher, and he also did awesome work in terms of explaining physics to the masses and motivating and getting people interested in physics. And that for me was also a big inspiration. - Yeah, I like the childlike curiosity of some of those folks, like you mentioned, Feynman.

Daniel Kahneman is someone I got a chance to meet and interact with. Some of these truly special scientists, what makes them special is that even in older age, there's still that fire of childlike curiosity that burns. And some of that is like not taking yourself so seriously that you think you've figured it all out, but almost like thinking that you don't know much of it.

And that's like step one in having a great conversation or collaboration or exploring a scientific question. It's cool how the very thing that probably earned people the Nobel prize or work that's seminal in some way is the very thing that still burns even after they've won the prize. It's cool to see.

And they're rare humans, it seems. - And to that point, I remember the last email that I sent to Freeman Dyson was on his last birthday. He was really into number theory and primes. So what I did is I took a photo of him, a picture, and then I turned that into a giant prime number.

So I converted the picture into a bunch of one and eights, and then I moved some numbers around until it was a prime. And then I sent him that. - Oh, so the visual, like it still looked like the picture, it was made up of a prime. That's tricky to do.

That's hard to do. - It looks harder than it actually is. So the way you do it is you convert the darker regions into eights and the lighter regions into ones. And then there's- - And then you just keep flipping numbers until- - Yeah, but there are some primality tests that are cheaper from a computational standpoint.

But what those tell you is that they exclude numbers that are definitely not prime. Then you end up with a set of numbers that you don't know if they are prime or not, and then you run the full primality test on those. So you just have to keep iterating on that.
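
To make the trick concrete, here is a toy sketch of that pipeline, roughly as described: map dark pixels to 8s and light pixels to 1s, then perturb digits until a fast primality check passes. The image, threshold, and perturbation strategy below are all invented for illustration, SymPy is assumed to be available for the primality test, and this is not the exact procedure Luís used.

```python
# A toy sketch of the "prime portrait" trick described above.
import random
from sympy import isprime   # fast probable-prime checks (the "cheap test" mentioned above)

def pixels_to_digits(gray_pixels, threshold=128):
    """Map dark pixels to '8' and light pixels to '1', row by row."""
    return "".join("8" if p < threshold else "1"
                   for row in gray_pixels for p in row)

def make_prime_portrait(digits, attempts=10000):
    """Randomly flip digits between '1' and '8' until the number is (probably) prime."""
    digits = list(digits)
    for _ in range(attempts):
        n = int("".join(digits))
        if n % 2 == 1 and isprime(n):      # skip even candidates outright
            return "".join(digits)
        i = random.randrange(len(digits))  # perturb one pixel/digit and retry
        digits[i] = random.choice("18")
    return None

# Tiny fake 4x4 "image": 0 = dark pixel, 255 = light pixel.
img = [[0, 255, 255, 0],
       [255, 0, 0, 255],
       [255, 0, 0, 255],
       [0, 255, 255, 0]]
print(make_prime_portrait(pixels_to_digits(img)))
```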

And it's funny because when he got the picture, he was like, "How did you do that?" He was super curious too. And then we got into the details. And again, he was already 90, I think 92 or something. And that curiosity was still there. So you can really see that in some of these scientists.

- So could we talk about Fermat's library? - Yeah, absolutely. - What is it? What's the main goal? What's the dream? - It is a platform for annotating papers in its essence. And so academic papers can be one of the densest forms of content out there and generally pretty hard to understand at times.

And the idea is that you can make them more accessible and easier to understand by adding these rich annotations to the side. And so you can just imagine a PDF view in your browser, and then you have annotations on the side. And then when you click on them, a sidebar expands and you have annotations that support LaTeX and Markdown.

And so the idea is that you can, say, explain a tougher part of a paper where there's a step that is not completely obvious, or you can add more context to it. And then over time, papers can become easier and easier to understand and can evolve in a way.

But it really came from myself, Luís, and two other friends. We've had this long-running habit of running a journal club amongst us. We come from different backgrounds. I studied CS, they studied physics. And so we'd read papers and present them to each other. And then we tried to bring some of that online.

And that's when we decided to build Fermat's library. And then over time, it kind of grew into something with a broader goal. And really what we're trying to do is trying to help move science in the right direction. That's really the ultimate goal and where we want to take it now.

So there's a lot to be said. So first of all, for people who haven't seen it, the interface is exceptionally well done. Execution is really important here. Absolutely. The other thing to mention is that a large number of people, apparently, which is new to me, don't know what LaTeX is.

So it's spelled like latex. So be careful googling it if you haven't before. It's a, sorry, I don't even know the correct terminology. Typesetting language? It's a typesetting language where you're basically writing a program that then generates something that looks beautiful from a typography perspective. Absolutely. And so a lot of academics use it to write papers.

I think there are a bunch of communities that use it to write papers. I would say it's mathematics, physics, computer science. Yeah. That's, yeah. That's it. That's the main- Because I'm collaborating currently on a paper with two neuroscientists from Stanford. And they don't know what LaTeX is. So I'm using Microsoft Word and Mendeley and all of those kinds of things.

And I'm being very Zen-like about the whole process, but it's fascinating. It's a little heartbreaking actually, because, and it's funny to say, but we'll talk about open science, the bigger mission behind Fermat's Library, like really opening up the world of science to everybody. These two silly facts, that one community uses LaTeX and another uses Word, are actually a barrier between them.

It's boring and practical in a sense, but this makes it very difficult to collaborate. Just on that, I think there are some people that should have received a Nobel Prize but will never get it. And I think one of those is Donald Knuth, because of TeX and then LaTeX, because it had a huge impact in terms of just making it easier for researchers to put their content out there, making it as uniform as possible.

Oh, you mean like a Nobel Peace Prize? Maybe a Nobel Peace Prize. Maybe a Nobel Peace Prize. Yeah, I think so. I mean, at a very young age he got the Turing Award for his work in algorithms and so on. I think it might even have been the sixties, but I think it's the seventies.

So when he was really young. And then he went on to do incredible work with his book and, yeah, with TeX, which people don't know about. And going back, just on the reason why we ended up, because I think this is interesting, the reason why we ended up using the name Fermat's Library, this was because of Fermat's Last Theorem.

And Fermat's Last Theorem is actually a funny story. So Pierre de Fermat, he was a lawyer, and he wrote in a book that he had a solution to Fermat's Last Theorem, but that it didn't fit in the margin of that book. And so Fermat's Last Theorem basically states that there is no solution.

If you have integers a, b, and c, there's no solution to a to the power of n plus b to the power of n equals c to the power of n if n is bigger than two. So there are no solutions. And he said that, and that problem remained open for almost 300 years, I believe.
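
In symbols, the statement being paraphrased is the standard one:

```latex
% Fermat's Last Theorem: for any integer exponent n > 2, the equation below
% has no solutions in positive integers a, b, c.
a^n + b^n = c^n \quad \text{has no solutions with } a, b, c \in \mathbb{Z}_{>0} \text{ and } n > 2.
```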

And a lot of the most famous mathematicians tried to tackle that problem. No one was able to figure it out until Andrew Wiles, I think it was in the nineties, was able to publish the solution, which was, I believe, almost 300 pages long. And so it's kind of an anecdote that, you know, there's a lot of knowledge and insight that can be trapped in the margins, and there's a lot of potential energy that you can release if you actually spend some time trying to digest that.

And that was the origin story for the name. - Yes, you can share the contents of the margins with the world. - Exactly. - That could inspire a solution or a communication that then leads to a solution. - And if you think about papers, like papers are, as João was saying, probably one of the densest pieces of text that any human can read.

And you have these researchers, some of the brightest minds in these fields, working on new discoveries and publishing this work in journals that impose restrictions on them in terms of the number of pages they can have to explain a new scientific breakthrough. So at the end of the day, papers are not optimized for clarity and for a proper explanation of that content, because there are so many restrictions.

So there's, as I mentioned, there's a lot of potential energy that can be freed if you actually try to digest a lot of the contents of papers. - Can you explain some of the other things? So Margins, Librarian, the journal club? - So the journal club is what a lot of people know us for, where every week we release an annotated paper in all sorts of different fields: physics, CS, math.

Margins is kind of the same software that we use to run the journal club and to host the annotations, but we've made that available for free to anybody that wants to use it. And so folks use it at universities and for running journal clubs. And so we've just made that freely available.

And then Librarian is a browser extension that we developed that is sort of an overlay on top of arXiv. So it's about bringing some of the same functionality around comments, plus adding some extra niceties to arXiv, like being able to very easily extract the references of a paper that you're looking at, or being able to extract the BibTeX in order to cite that paper yourself.

So it's an overlay on top of arXiv. - The idea is that you can have that commenting interface without having to leave arXiv. - It's kind of incredible. I didn't know about it. And once I learned of it, it's like, holy shit, why isn't it more popular given how popular arXiv is?

Like everybody should be using it. arXiv sucks in terms of its interface. Or let me rephrase that, it's limited. Yeah. - Yeah. - In terms of its interface. - arXiv is a pretty incredible project, right? And in a way, you know, the growth has been completely linear over time.

If you look at the number of papers published on arXiv, it's pretty much a straight line for the past 20 years. Especially if you're coming from a startup background and you were trying to do arXiv, you'd probably try all sorts of growth hacks and maybe have paid features and things like that.

And that would kind of maybe ruin it. And so there's a subtle balance there. And I don't know what aspects you can change about it. - Yeah. For some tools in science, it just takes time for them to grow. arXiv just turned 30, I believe. And for people that don't know, arXiv is this kind of online repository where people put preprints, which are versions of the papers before they actually make it to journals.

- arXiv. - Exactly. - For people who don't know. And it's actually a really vibrant place to publish your papers in the aforementioned communities of mathematics, physics, and computer science. - It started with mathematics and physics. And then over the last 30 years, it evolved. And now computer science is actually a more popular category than physics and math on arXiv.

- And there's also, which I don't know very much about, like a biology, medical version of that. - bioRxiv. Yeah, bioRxiv. - More recent. - It's interesting because these platforms for preprints actually play a super important role. Because if you look at a category like math, for some papers in math, it might take close to three years between when you click upload on the journal website and when the paper gets published on the website of the journal.

So this is literally the longest upload period on the internet. And during those three years, that content is just locked. And so that's why it's so important for people to have websites like arXiv, so that you can share that with the rest of the world before it goes to the journal.

It was actually on arXiv that Perelman published the three papers that led to the proof of the Poincaré conjecture. And then you have other fields like machine learning, for instance, where the field is evolving at such a high rate that people don't even wait for the papers to go to journals before they start working on top of those papers.

So they publish them on arXiv, then other people see them and start working on that. And arXiv did a really good job at building that core platform to host papers. But I think there's a really, really big opportunity in building more features on top of that platform, apart from just hosting papers.

So collaboration, annotations, having other things apart from papers, like code and other things. Because, for instance, in a field like machine learning, there's a really big... As I mentioned, people start working on top of preprints and they are assuming that that preprint is correct. But you really need a way, for instance, to maybe...

It's not peer review, but to distinguish what is good work from bad work on arXiv. How do you do that? So a commenting interface like Librarian is useful for that, so that you can distinguish that in a field that is growing as fast as machine learning. And then you have platforms that focus, for instance, on just biology.

bioRxiv is a good example. bioRxiv is also super interesting because there's actually an interesting experiment that was run in the '60s. So in the '60s, the NIH supported this experiment called the Information Exchange Group, which at the time was a way for researchers to share biology preprints via mail or using libraries.

And that project in the 1960s got canceled six years after it started. And it was due to intense pressure from the journals to kill that project, because they were fearing competition from the preprints for the journal industry. Crick was one of the famous scientists that opposed the Information Exchange Group.

And it's interesting because right now, if you analyze the number of biology papers that appear first as preprints, it's only 2% of the papers. And this is almost 50 years after that first experiment. So you can see that the pressure from the journals to cancel that initial version of a preprint repo had a tremendous impact on the number of papers that are showing up in biology as preprints.

So it delayed that revolution a lot. But now platforms like bioRxiv are doing that work. But there's still a lot of room for growth there. And I think it's super important because those are the papers that are open, that everyone can read. - Okay, but if we just look at the entire process of science as a big system, can we just talk about how it can be revolutionized?

So you have an idea. Depending on the field, you wanna make that idea concrete, you wanna run a few experiments. In computer science, there might be some code, there'd be a data set. For some of the more, sort of, biology or psychology fields, you might be collecting the data set, that's called a study, right?

So that's part of that, that's part of the methodology. And so you are putting all of that into a paper form. And then you have some results. And then you submit that to a place for review through the peer review process. And there's a process where, how would you summarize the peer review process, but it's really just like a handful of people look over your paper and comment and based on that, decide whether your paper is good or not.

So there's a whole broken nature to it. At the same time, I love the peer review process when I buy stuff on Amazon, for like the commenting system, whatever that is. So, okay, so there's a bunch of possibilities for revolutions there. And then there's the other side, which is the collaborative aspect of the science, which is people annotating people commenting sort of the low effort collaboration, which is a comment.

Sometimes, as you've talked about, a comment can change everything. Or a higher-effort collaboration, like maybe annotations or even contributing to the paper. You can think of a collaborative updating of the paper over time. So there's all these possibilities for doing things better than they've been done.

Can we talk about some ideas in this space, some ideas that you're working on, some ideas that you're not yet working on, but that should be revolutionized? Because it does seem that arXiv and, like, OpenReview, for example, are like the Craigslist of science. Like, yeah, okay, I'm very grateful that we have it, but it just feels like it's 10 to 20 years behind.

Like it doesn't feel like the simplicity of it is a feature. It feels like it's a bug. But then again, the pushback there is Wikipedia has the same kind of simplicity to it. And it seems to work exceptionally well in the crowdsourcing aspect of it. So, sorry, there's a bunch of stuff going on on the table. Let's just pick random things that we can talk about.

Let's just pick random things that we can talk about. - Wikipedia, for me, it's the cosmological constant of the internet. I think we are lucky to live in the parallel universe where Wikipedia exists. Because if someone had pitched me Wikipedia, like a publicly edited encyclopedia, like a couple of years ago, like it would be, I don't know how many people would have said that that would have survived.

- I mean, it makes almost no sense. It's like having a Google doc that everybody on the internet can edit. And like, that will be like the most reliable source for knowledge. - I don't know how many, but hundreds of thousands of topics. - Yeah. It's insane. - It's insane.

And then you have users, like there's a single user that edited one third of the articles on Wikipedia. So we have these really, really big power users. They are a substantial part of like what makes Wikipedia successful. And so, like, no one would have ever imagined that that could happen.

And so that's one thing. I completely agree with what you just said. - Sorry to interrupt briefly. Maybe let's inject that into the discussion of everything else. I also believe, I've seen that with Stack Overflow, that one individual or a small collection of individuals contribute or revolutionize most of the community.

Like if you create a really powerful system for arXiv or, like, OpenReview, and made it really easy and compelling and exciting for one person who is like a 10X contributor to do their thing, that's going to change everything. It seems like that was the mechanism that changed everything for Wikipedia.

And that's the mechanism that changed everything for Stack Overflow is gamifying or making it exciting or just making it fun or pleasant or fulfilling in some way for those people who are insane enough to like answer thousands of questions or write thousands of factoids and like research them and check them, all those kinds of things, or read thousands of papers.

- Yeah. No, Stack Overflow is another great example of that. And those are both two incredibly productive communities that generate a ton of value and capture almost none of it. And in a way, it's very counterintuitive that these communities would exist and thrive. And there aren't that many communities like that.

And it's really hard to, there aren't that many communities like that. - So how do we do that for science? Do you have ideas there? Like what are the biggest problems that you see? Are you working on some of them? - Just on that, there are a couple of really interesting experiments that people are running.

An example would be the Polymath projects. So this is kind of a social experiment that was created by Tim Gowers, a Fields Medalist. And his idea was to see whether it is possible to do mathematics in a massively collaborative way on the internet. So he decided to pick a couple of problems and test that.

And they found out that it actually is possible for specific types of problems, namely problems that you're able to break down into little pieces and go step by step. You might need, as with open source, people that are just kind of reorganizing the house every once in a while.

And then people throw in a bunch of ideas, and then you make some progress, then you reorganize, you reframe the problem, and you go step by step. But they were actually able to prove that it is possible to collaborate online and make progress in terms of mathematics. And so I'm confident that there are other avenues that could be explored here.

- Can we talk about peer review, for example? - Absolutely. I think like in terms of the peer review, I think it's important to look at the bigger picture here of like, of what the scientific publishing ecosystem looks like. Because for me, there are a lot of things that are wrong about that entire process.

So if you look, for instance, at what publishing means in traditional publishing, you have publishers that pay authors for their articles, and then they might pay reviewers to review those articles. And finally, they pay people, or distributors, to distribute the content. In the scientific publishing world, you have scientists that are usually backed by government grants giving away their work for free in the form of papers.

And then you have other scientists that are reviewing their work. This process is known as the peer review process, and again it's done for free. And then finally, we have government-backed universities and libraries that are buying back all that work so that other scientists can read it. So this is, for me, it's bizarre.

You have the government that is funding the research, is paying the salaries of the scientists, is paying the salaries of the reviewers, and is buying back the product of their work again. And I think the problem with this system, and why it's so difficult to break this suboptimal equilibrium, is the way academia works right now and the way you can progress in your academic life.

And so, in a lot of fields, the competition in academia is really insane. So you have hundreds of PhD students that are trying to get to a professor position, and it's hyper competitive. And the only way for you to get there is if you publish papers, ideally in journals, with a high impact factor.

In computer science, it's often conferences that are also very prestigious, or actually more prestigious than journals now. Okay, interesting. So that's the one discipline where, I mean, that has to do with the thing we've discussed in terms of how quickly the field turns around. But like, NeurIPS, CVPR, those conferences are more prestigious, or at the very least, as prestigious as the journals.

But yeah, it doesn't matter. The process is what it is. So for people that don't know, the impact factor of a journal is basically the average number of citations that a paper would get if it gets published in that journal. And the problem with the impact factor is that it's a way to turn papers into accounting units.
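
For reference, the standard two-year impact factor is computed roughly like this; the "average citations per paper" description above is an informal version of the same thing:

```latex
% Two-year impact factor of a journal in year y
\mathrm{IF}_{y} \;=\;
\frac{\text{citations received in year } y \text{ to items the journal published in years } y-1 \text{ and } y-2}
     {\text{number of citable items the journal published in years } y-1 \text{ and } y-2}
```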

And let me unpack this, because the impact factor is almost like a nobility title: papers are born with impact, even before anyone reads them. So the researchers don't have the incentive to care about whether this paper is ever going to have a long-term impact on the world.

What they care about, their end goal, is for the paper to get published, so that they get that value up front. So for me, that is one of the problems. And that really creates a tyranny of metrics. Because at the end of the day, if you are a dean, what you want to hire is researchers that publish papers in journals with high impact factors, because that will increase the ranking of your university and will allow you to charge more for tuition, and so on and so forth.

And especially when you are in super competitive areas, people will try to gamify that system and misconduct starts showing up. There's a really interesting book on this topic called Gaming the Metrics. It's a book by a researcher called Mario Biagioli. It goes a lot into how the impact factor and metrics affect science negatively.

And it's interesting to think, especially in terms of citations, if you look at the early work on citations, there was a lot of work that was done by a guy called Eugene Garfield. And in this early work on citations, they wanted to use citations from a descriptive point of view.

So what they wanted to create was a map. And that map would create a visual representation of influence. So citations would be links between papers. And ideally, what they would represent is that you read someone else's paper and it had an impact on your research. They weren't supposed to be counted.

I think this inspired Larry and Sergey's work, right? For Google. Exactly. I think they even mentioned that. But what happens is, as you start counting citations, you create a market. And in the same way, this work of Eugene Garfield was a big inspiration for Larry and Sergey for the PageRank algorithm that led to the creation of Google.

And they even recognized that. And if you think about it, the same way there's a gigantic market for search engine optimization, SEO, where people try to optimize PageRank and how a web page will rank on Google, the same will happen for papers. People will try to optimize the impact factors and the citations that they get.
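
As a concrete illustration of the citations-as-links idea, here is a minimal power-iteration PageRank over a tiny, made-up citation graph. The paper names and links are invented, and this is only a sketch of the algorithm, not how Google or any citation index actually computes scores.

```python
# Minimal PageRank by power iteration over a toy citation graph.
# An edge A -> B means "paper A cites paper B"; the papers and links are made up.
def pagerank(links, damping=0.85, iterations=50):
    papers = list(links)
    rank = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(papers) for p in papers}
        for p, cited in links.items():
            if cited:                            # spread p's rank over the papers it cites
                share = damping * rank[p] / len(cited)
                for q in cited:
                    new_rank[q] += share
            else:                                # paper with no references: spread evenly
                for q in papers:
                    new_rank[q] += damping * rank[p] / len(papers)
        rank = new_rank
    return rank

citations = {"paperA": ["paperC"], "paperB": ["paperC"], "paperC": []}
print(pagerank(citations))   # the most-cited paper, paperC, ends up with the highest score
```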

And that creates a really big problem. And it's super interesting to actually analyze them. If you look at the distribution of the impact factors of journals, you have Nature, which I believe is in the low 40s. And then you have, I believe, Science in the high 30s.

And then you have a really good set of good journals that will fall between 10 and 30. And then you have a gigantic tail of journals that have an impact factor below two. And you can really see two economies here. You see the universities that are maybe less prestigious, less known, where the faculty are pressured to just publish papers, regardless of the journal, because what the university wants is to increase its ranking.

And so they end up publishing as many papers as they can in journals with low impact factors. And unfortunately, this represents a lot of the global South. And then you have the luxury good economy. And there are also problems here in the luxury good economy. So if you look at a journal like Nature, with an impact factor in the low 40s, there's no way that you're going to be able to sustain that level of impact factor by just grabbing the attention of scientists.

What I mean by that is, for those journals, the articles that get published in Nature need to be New York Times great. So they need to make it to the big media, they need to be captured by the big media. Because that's the only way for you to capture enough attention to sustain that level of citations.

And that, of course, creates problems, because people then will try to, again, gamify the system and have titles or abstracts that make claims that are bigger than what can actually be sustained by the data or the content of the paper. And you will have clickbait titles or clickbait abstracts.

And again, this is all a consequence of metrics and the science of metrics. And this is a very dangerous cycle that I think is very hard to break, but it's happening in academia in a lot of fields right now. - Is it fundamentally the existence of metrics, or do the metrics just need to be significantly improved?

Because like I said, the metrics used for Amazon for purchasing, I don't know, computer parts are pretty damn good in terms of selecting which are the good ones, which are not. In that same way, if we had an Amazon type of review system in the space of ideas, in the space of science, it feels like those metrics would be a little bit better.

Sort of when it's significantly more open to the crowdsourced nature of the internet, of the scientific internet. Meaning, as opposed to, like, my biggest problem with peer review has always been that it's like five, six, seven people, usually even fewer. And often nobody's incentivized to do a good job in the whole process.

Meaning it's anonymous in a way that doesn't gamify or incentivize great work. And also it doesn't necessarily have to be anonymous. Like, the entire system doesn't encourage actual, sort of, rigorous review. For example, OpenReview does kind of incentivize that kind of process of collaborative review, but it's also imperfect.

It just feels like the thing that Amazon has, which is thousands of people contributing their reviews to a product. It feels like that could be applied to science, the same kind of thing you're doing with Fermat's Library, but done at a scale that's much larger. It feels like that should be possible given the number of grad students, given the general public. Like, for example, I personally, as a person who got an education in mathematics and computer science, can be a quote-unquote reviewer on a much bigger set of things than my exact expertise.

If I'm one of thousands of reviewers, that's fine; if I'm the only reviewer, or one of five, then I better be an expert in the thing. But I've learned this with COVID, which is that you can just use your basic skills as a data analyst to contribute to the review process on a particular little aspect of a paper, and be able to comment, be able to draw in some references that challenge the ideas presented or enrich the ideas that are presented. It just feels like crowdsourcing the review process would allow you to have metrics for how good a paper is that are much more representative of its actual impact in the world, of its actual value to the world, as opposed to some kind of arbitrary gamified version of its impact.

- I agree with that. I think there's definitely the possibility, at least, of a more resilient system than what we have today. And I think that's kind of what you're describing, Lex. And I mean, to an extent, we kind of have a little bit of a Heisenberg uncertainty principle here.

When you pick a metric, as soon as you do it, then maybe it works as a good heuristic for a short amount of time, but soon enough people will start gamifying it. But then you can definitely have metrics that are more resilient to gamification, and they'll work as a better heuristic to try to push you in the best direction.

- But I guess the underlying problem you're saying is there's a shortage of positions in academia. - That's a big problem for me. - Yeah. And that, and so they're going to be constantly gamifying the metrics. - It's a bit of a zero sum game. - It's a very competitive, it's a very competitive field.

And that's what usually happens in very competitive fields. - Yeah. - But I think some of like the peer review problems, like scale helps, I think. And it's interesting to look at like what you're mentioning, breaking it down, maybe in like smaller parts and having more people jumping in.

But this is definitely a problem. And the peer review problem, as I mentioned, is correlated with the problem of like academic career progression and it's all intertwined. And that's why I think it's so hard to break it. There are like a couple of really interesting things that are being done right now.

There are, for instance, a couple of journals that are overlay journals on top of platforms like arXiv and bioRxiv, that want to remove the more traditional journals from the equation. So essentially a journal is just a collection of links to papers. And what they are trying to do is remove that middleman, make the review process a little bit more transparent, and not charge universities.

There are a couple of more famous ones. There's one called Discrete Analysis in mathematics. There's one called the Quantum Journal, which we're actually working with. We have a partnership with them: the papers that get published in the Quantum Journal also get the annotations on Fermat's, and they're doing pretty well.

They've been able to grow substantially. The problem there is getting to critical mass. So it's, again, convincing the researchers, and especially the young researchers that need that impact factor, need those publications to have citations, to not publish in the traditional journals and instead go to an open journal and publish their work there.

I think there are a couple of really high-profile scientists, people like Tim Gowers, that are trying to get famous scientists who already have tenure and don't need that impact factor to publish there, to increase the reputation of those journals, so that other, maybe younger, scientists can start publishing in those as well.

And so they can try to break that vicious cycle of the more traditional journals. - I mean, another possible way to break this cycle is to raise public awareness and just, by force, ban paid journals. Like, what exactly are they contributing to the world? Basically making it illegal. And forget the fact that it's mostly federally funded.

So that's a super ugly picture too. But like, why should knowledge be so expensive? Like where everyone is working for the public good and then there's these gatekeepers that, you know, most people can't read most papers without having to pay money. And that doesn't make any sense. Like that should be illegal.

- I mean, what you're saying is exactly right. I mean, for instance, right, I went to school here in the US, he studied in Europe, and he would ask me all the time to download papers and send them to him because he just couldn't get the papers that he needed for his research.

And so- - But he's a student, like he's- - Yeah, he's a grad student. - He was a grad student, but, you know, I'm even referring to just regular people. - Oh yeah, okay, that too. - And I think during 2020, because of COVID, a lot of journals took down the paywalls for certain kinds of coronavirus-related papers.

But that just gave me an indication that this should be done for everything. It's absurd. People should be outraged that there are these gates. Because the moment you dissolve the journals, then there'll be an opportunity for startups to build stuff on top of arXiv. It'd be an opportunity for Fermat's Library to step up, to scale up to something even much larger.

I mean, that was the original dream of Google, which I've always admired, which is make the world's information accessible. Actually, it's interesting that Google hasn't, maybe you guys can correct me, but they have put together Google Scholar, which is incredible. And they did the scanning of books, but they haven't really tried to make science accessible in the following way.

Like besides doing Google Scholar, they haven't like delved into the papers, right? - Which is especially curious given what Luis was saying, right? That it's kind of in their genesis, there's this research that was very connected with how papers reference each other and like building a network out of that.

- Interestingly enough, with Google, I think it was not intended, but Google Plus, the Google social network that got canceled, was used by a lot of researchers. - Yes, it was. - Which I think was just kind of a side effect, and then a lot of people ended up migrating to Twitter, but it was not on purpose.

But yeah, I agree with you, they haven't gone past Google Scholar and I don't know why. - Well, that said, Google Scholar is incredible. For people who are not familiar, it's one of the best aggregations of all the scientific work that's out there, and especially the network that connects all of it, what cites what, and also trying to aggregate all of the versions of the papers that are available there and merge them in a way that one particular work, even though it's available in a bunch of places, counts as, you know, a central hub of what that work is across the multiple versions.

But that almost seems like a fun pet project of a couple of engineers within Google, as opposed to a serious effort to make the world's science accessible. - But going back to the journals, when you're talking about that, Lex, I believe that on that front, we might be past the event horizon.

So I think the business model for the journals doesn't make sense. They're a middle layer that is not adding a lot of value, and you see a lot of motion, where, like in Europe, a lot of the papers that are funded by the European Union will have to be open to the public.

And I think there's a lot of... - Bill Gates too, like what the Gates Foundation funds, they demand that it's accessible to everybody. - So I think it's a question of time before that wall kind of falls. And that is going to open a lot of possibilities. Because, you know, imagine if you had that gigantic layer of papers all available online; that unlocks a lot of potential as a platform for people to build things on top of.

- But I think to what you're saying, it is weird. Like you can literally go and listen to any song that was ever made on your phone, right? You open Spotify and you might not even pay for it. You might be on the free version and you can listen to any song that was ever made pretty much.

But there's like, you don't have access to a huge percentage of academic papers, which is just like this fundamental knowledge that we're all funding, but you as an individual don't have access to it. And somehow, you know, like the problem for music got solved, but for papers, it's still like...

- It's just not there yet. It could be ad-supported, all those kinds of things. And then hopefully that would change the way we do science. That's the most exciting thing for me. Especially once I started making videos and this silly podcast thing, I started to realize that if you want to do science, one of the most effective ways is to couple the paper with a set of YouTube videos explaining it.

- Yeah. That also seems like there's a lot of room for disruption there. What is the paper 2.0 going to look like? Because with LaTeX and the PDF, if you look at the first paper that got published in Nature and a paper that got published in Nature today, and look at the two side by side, they are fundamentally the same.

And even with the paper that gets published today, you know, you even get code; right now people put code in a PDF. And there are so many things that are related to papers today. You have data, you have code, you might need videos to better explain the concepts.

So for me, it's natural that there's going to be an evolution there too, that papers are not going to be just static PDFs or LaTeX. There's going to be a next interface. - So in academia, a lot of what you're judged by is often quantity, not quality.

I wonder if there's an opportunity to have, like, I tend to judge people by the best work they've ever done as opposed to... I wonder if there's a possibility for that to encourage sort of focusing on the quality, and not necessarily in paper form, but maybe a subset of a paper, a subset of an idea, almost even a blog post or an experiment.

Like why does it have to be published in a journal to be legitimate? - And it's interesting that you mention that. I also think, yeah, why is that the only format? Why can't a blog post or... We were even experimenting with this a few months ago. Can you actually publish something, like a new scientific breakthrough or something that you've discovered, in the form of a set of tweets, a Twitter thread?

Why can't that be possible? And we were experimenting with that idea. Some people submitted a couple of those; I think the limit was three or four tweets. Maybe it's a new way to look at a proof or something, but I think it just serves to show that there should be other ways to publish scientific discoveries that don't fit the paper format.

- Well, but even with the Twitter thread, it would be nice to have some mechanism of formalizing it and making it into an NFT. - Maybe. - Like a concrete thing that you can reference with a link that's unique. Because, I mean, everything we've been saying, all of that, while being true, it's also true that the constraints and the formalism of a paper work well.

Constraints force you to narrow down your thing and literally put it on paper, but you know. - I agree. - Make it concrete. And that's why, I mean, it's not broken. It just could be better. And that's the main idea. I think there's something about writing, whether it's a blog post or a Twitter thread or a paper, that's really nice for concretizing a particular little idea that can then be referenced by other ideas, and then it can be built on top of with other ideas.

So let me ask, you've read quite a few papers, you've annotated quite a few papers. Let's talk about the process itself. How do you advise people to read papers? Or maybe you want to broaden it beyond just papers: how to read concrete pieces of information to understand the insights that lie within.

- I would say for papers specifically, I would bring back kind of what Luis was talking about, is that it's important to keep in mind that papers are not optimized for ease of understanding. And so, right, there's all sorts of restrictions in size and format and language that they can use.

And so it's important to keep that in mind, so that if you're struggling to read a paper, it might not mean that the underlying material is actually that hard. And that's definitely something for us especially, since we read a lot of papers, and most of the time we'll read papers that are completely outside of our comfort zone, I guess.

And so they'd be completely new areas to us. So I always try to keep that in mind. - So there's usually a certain kind of structure: abstract, introduction, methodology, depending on the community and so on. Is there something about the process of how to read it, whether you want to skim it to try to find the parts that are easy to understand or not, reading it multiple times? Are there any kinds of hacks that you can comment on?

Is there any kind of hacks that you can comment on? - I remember like Feynman had this kind of hack when he was reading papers where he would basically, I think I believe he would read the conclusion of the paper and we would try to just see if he would be able to figure out how to get to the conclusion in like a couple of minutes by himself.

And he would read a lot of papers that way. And I think Fermi also did something similar. And Fermi was known for doing a lot of back-of-the-envelope calculations. So he was a master at doing that. In terms of reading a paper, I think a lot of times people might feel discouraged because the first time you read it, it's very hard to grasp, or you don't understand a huge fraction of the paper.

Having read a lot of papers in my life, I think I'm at peace with the fact that you might spend hours where you're just reading a paper and jumping from paper to paper, reading citations, and your level of understanding of the paper is sometimes very close to 0%.

And all of a sudden, everything kind of makes sense in your mind. And then you have this quantum jump where all of a sudden you understand the big picture of the paper. And this is an exercise that I have to do when reading papers, and especially like more complex papers, like, okay, you don't understand because you're just going through the process and just keep going.

And it might feel super chaotic, especially if you're jumping from reference to reference. You might end up with like 20 tabs open and you're reading a ton of other papers, but it's just trusting that process, that at the end, like you'll find light. And I think for me, that's a good framework when reading a paper.

It's hard because you might end up spending a lot of time and it looks like you're lost, but that's the process to actually understand what they're talking about in the paper. - Yeah, I think that process, I've found a lot of value in the process, especially for things outside my field, reading a lot of related work sections and kind of going down that path of getting a big context of the field, because what's, especially when they're well-written, there's opinions injected into the related work.

Like what work is important, what is not. And if you read multiple related work sections that cite or don't cite each other, like the papers, you get a sense of where the field, where the tensions of the field are, where the field is striving. And that helps you put into context, like whether the work is radical, whether it's overselling itself, whether it's underselling itself, all those things.

And on top of that, I find that often the related work section is the most accessible and readable part of a paper, because it's brief, it's to the point, it's trying to summarize; it's almost like a Wikipedia-style article. The introduction is supposed to be a compelling story or whatever, but it's often overselling; there's an agenda to the introduction.

The related work usually has the least amount of agenda, except for the few like elements where you're trying to talk shit about previous work, where you're trying to sell that you're doing much better. But other than that, when you're just painting where the field came from or where the field stands, that's really valuable.

And also, again, to agree with the Feynman approach on the conclusion, I get a lot of value from the breadth-first search: read the conclusion, then read the related work, and then go through the references in the related work, read their conclusions, read their related work, and just go down the tree until you hit dead ends or run out of coffee.

And then through that process, you go back up the tree and now you can see the results in their proper context. Unless of course the paper is truly revolutionary, which even that process will help you understand that is in fact truly revolutionary. You've also, you talked about just following your Twitter thread in a depth first search.

You talked about reading the book on Grisha Perelman, Grigori Perelman, and then you had a really nice Twitter thread on it and you were taking notes throughout. So at a high level, are there suggestions you can give on how to take good notes? Whether we're talking about annotations or just notes for yourself, to try to put ideas on paper as you progress through the work in order to understand the work better?

For me, I always try not to underestimate how much you can forget within six months after you've read something. I thought you were going to say five minutes, but yeah, six months is good. Yeah, or even shorter. And so that's something that I always try to keep in mind.

And it's often, I mean, every once in a while I'll read back a paper that I annotated on Fermat and I'll read through my own annotations and I've completely forgotten what I had written. But it also, it's interesting because in a way, after you just understood something, you're kind of the best possible teacher that can teach your future self.

For the future you who has forgotten it, the you of that moment is kind of your own best possible teacher. And so it can be great to try to capture that. It's brilliant. It just made me realize it's really nice to put yourself in the position of teaching an older version of yourself that returns to this paper, almost thinking of it literally.

That's underexplored, but it's super powerful because, if you look at a scale from one, not knowing anything about the topic, to 10, you are the one who progressed from one to 10 and you know which steps you struggled with. So you're really the best person to help yourself make that transition from one to 10.

And a lot of the time, I really believe we need a framework that exposes us to ourselves talking to us from when we were an expert, when we were taking that class and knew everything about quantum mechanics. And then six months later, you don't remember half of those things.

How could we make it easier to have those conversations between you and your past expert self? I think there might be, it's an under explored idea. I think notes on paper are probably not the best way. I'm not sure if it's a combination of video, audio, where it's like you have a guided framework that you follow to extract information from yourself so that you can later kind of revisit to make it easier to remember.

But that's, I think it's an interesting idea worth exploring that I haven't seen a lot of people kind of trying to distill that problem. Yeah, creating the kind of tools. I find if I record, it sounds weird, but I'll take notes. But if I record audio, like little clips of thoughts, like rants, that's really effective at capturing something that notes can't.

Because when I replay them, for some reason, it loads my brain back into where I was when I was reading that in a way that notes don't. Like when I read notes, I'll often be like, "Whoa, what? What was I thinking there?" But when I listened to the audio, it brings you right back to that place.

And maybe with video, with visual, that might be even more powerful. I think so. Yeah. And I think just the process of verbalizing it, that alone kind of makes you have to structure your thought and put it in a way that somebody else could come and understand it. And just the process of that is useful to organize your thoughts.

And yeah, just that alone. Does the Fermat's Library Journal Club have a video component or no? Not natively. Sometimes we'll include videos, but it's always embedded. Do people build videos on top of it to explain the paper? Because you're doing all the hard work of understanding deeply the paper.

We haven't seen that happening too much, but we were actually playing around with the idea of creating some sort of podcast version where we try to distill the paper into an audio format that maybe more people can have access to. It might be trickier, but there are definitely people who could be interested in the paper and that topic, but are not willing to read it.

But they might listen to a 30-minute episode on that paper. You could reach more people and you might even bring the authors to the conversation, but it's tricky, especially for more technical papers. We've thought about doing that, but we haven't converged. I'm sure if you have any... Well, I'm going to take that as a small project to take one of the Fermat's, almost half advertisement and half as a challenge for myself, to take one of the annotated papers and use it as a basis for creating a quick video.

I've seen, hopefully I'm saying the name correctly, but Machine Learning Street Talk. I think that's the name of the show that I recommend highly. That's the right name. But they do exactly that, which is multiple hour breakdown of a paper with video component. Sometimes with authors, people love it.

It's very effective. There's also, I haven't seen it in its entirety, but I've seen the founder of comma.ai, George Hotz, just taking a paper, distilling the paper, and coding it up, sometimes over 10 hours. And he was able to get a lot of people interested in that and watching him.

- So I'm a huge fan of that. George is a personality. I think a lot of people listen to this podcast for the same reason. It's not necessarily the contents. They like to listen to a silly Russian who has a childlike brain and mumbles and all those like struggle with ideas.

And George is a madman. People just enjoy, how is he going to struggle in implementing this particular paper? How is he going to struggle with this idea? It's fun to watch and that actually pulls you in. The personality is important there. - True. I agree with you, but there also, it's visible.

There's an extraordinary ability that is there. He's talented; there's a craft, and this guy definitely has talent and he's doing something that is not easy. And I think that also draws the attention of people. The other day, we actually ran into this YouTube channel of a guy who was restoring art.

And it was basically just a video of him. The production is really well done. And it's just him taking really old pieces of art and then paintings and then restoring them. But he's really good at that. And he describes that process and that draws the attention of people, regardless of your craft, be it like annotating a paper, restoring it.

- Craftsmanship, excellence. Yeah. George is incredibly good at programming. You know, those competitive programmers, like on TopCoder and all that kind of stuff. He has the same kind of element where the brain just jumps around really quickly. And that's, yeah, just like with art restoration. - And it's also motivating.

- Yeah, it's motivating. But you're right, in watching people who are good at what they do, it's motivating, even if the thing you're trying to do is not what they're doing. It's just like contagious when they're really good at it. And the same kind of analysis with the paper, I think, so not just like the final result, but the process of struggling with it.

That's really interesting. - Yeah. I think Twitch proved that like, you know, that there's really a market for that, for watching people do things that they're really good at and you'll just watch it. You will enjoy that, that might even spike your interest in that specific topic. And yeah, people will enjoy watching sometimes hours on end of great craftsmen.

- Do you mind if we talk about some of the papers, do any papers come to mind that have been annotated on Fermat's library? - The papers that we annotated can be about completely random topics, but that's part of what we enjoy as well. It forces you to explore these topics that otherwise maybe you'd never run into.

And so the ones that come to mind for me are fairly random, but one that I really enjoyed learning more about is a paper written by a mathematician, actually, Tom Apostol, about a tunnel on a Greek island off the coast of Turkey. So it's very random. Okay, so what's interesting about this tunnel?

So this tunnel was built in the 6th century BC on the island of Samos, which is, as I said, off the coast of Turkey. They had the city on one side of a mountain, and then they had a bunch of springs on the other side, and they wanted to bring water into the city.

Building an aqueduct would be pretty hard because of the way the mountain was shaped. And also, if they were under siege, an enemy could easily destroy that aqueduct and then the city wouldn't have any water supply. And so they decided to build a tunnel, and they decided to try to do it quickly.

And so they started digging from both ends at the same time through the mountain. And when you start thinking about this, it's a fairly difficult problem. And this is the 6th century BC, so the mathematical tools you had access to at the time were very limited.

And so what this paper is about is the story of how they built it, and the fact that for about 2000 years, the accepted explanation of how they built it was actually wrong. This tunnel has been famous for a while; a number of historians have talked about it since ancient times.

And the method that they described for building it was just wrong. And so these researchers went there and were able to figure that out. And so basically, kind of the way that they thought they had built it was basically, if you can imagine looking at the mountain from the top and you have the mountain, then you have both entrances.

And so what they thought, and this is what the ancient historians described, is that they effectively tried to draw a right-angle triangle with the two entrances at each end of the hypotenuse. And the way they did it is they would go around the mountain, walking in a grid fashion.

And then you can figure out the two sides of the triangle. And then after you have that triangle, you can effectively draw two smaller triangles at each entrance that are proportional to that big triangle. And then you kind of have arrows pointing in each way. And then you know at least that you have a line going through the mountain that connects both entrances.

The issue with that is, once you go to this mountain and you start thinking of doing this, you realize that, especially given the tools they had at the time, your margin for error would be too small. You wouldn't be able to do it. Just from trying to build this triangle in that fashion, the error would accumulate and you would end up missing.

You'd start building these tunnels and they would miss each other. - So the task ultimately is to figure out like really perfectly, as close as possible, the direction you should be digging. First of all, that it's possible to have a straight line through and then what that direction would be.

And then you are trying to infer that by constructing a right triangle. I'm not exactly sure about how to do that rigorously, like by tracing the mountain, by walking along the mountain. You said grids? - Yeah, you kind of walk as if you were in a grid and so you just walk in right angles.

- But then you have to walk really precisely. You have to use tools to measure this, and the terrain is probably a mess. So this makes more sense in 2D, and in 3D it gets even weirder. So, okay, gotcha. - But so this method was described by an ancient historian, I think Hero of Alexandria, who worked in Alexandria in Egypt.
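
As a quick aside, here is a minimal sketch of the triangle construction just described, following the usual reconstruction of Hero's method; the symbols a, b, and theta are illustrative and not taken from Apostol's paper.

```latex
% Walk a rectangular traverse around the mountain from entrance A to entrance B,
% recording every east-west step \Delta x_i and every north-south step \Delta y_i.
\[
  a = \sum_i \Delta x_i , \qquad b = \sum_i \Delta y_i .
\]
% The straight line through the mountain is then the hypotenuse of the right
% triangle with legs a and b, so both digging crews aim along the bearing
\[
  \tan\theta = \frac{a}{b},
\]
% fixed on the ground by staking out small triangles with the same leg ratio
% a : b at each entrance. The accumulated surveying error in a and b is what
% makes this so hard with 6th-century BC instruments.
```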

And then for about 2000 years, that's how we thought they'd built this tunnel. And then these researchers went there and found out that they must have actually used other methods. In this paper, they describe these other methods. And of course, they can't know for sure, but they presented a bunch of plausible alternatives.

The one that for me is the most plausible is that what they probably must have done is to use something that is similar to an iron sight on a rifle, the way you can line up your rifle with a target off in the distance by having an iron sight.

And they must have done something similar to that, effectively with three sticks. That way they were able to line up sticks along the side of the mountain that were all at the same height, so that you could get to the other side and then draw that line.

So this for me is the most plausible way they might have done it. But they describe this in detail, along with other possible approaches, in this paper. - So this is a mathematician doing this? - Yeah, this is a mathematician who did this. - Which I suppose is the right mindset and set of skills required to solve an ancient problem.

- Yeah. - Mathematicians and engineers, they have a lot in common. - Because they didn't have computers or drones or LIDAR back then, or whatever technology you would use in the modern day for civil engineering. - Yeah. And another fascinating thing is that effectively after the downfall of the Roman civilization, people didn't build tunnels for about a thousand years.

We go a thousand years without tunnels, and it's only in the late Middle Ages that we start building them again. But here is a tunnel from the sixth century BC, with incredibly limited mathematics, and they built it in this way. And it was a mystery for a long time exactly how they did it.

And then these mathematicians went there and, basically with no archaeology background, were able to figure it out. - What do annotations for this paper look like? What's a successful annotation for a paper like this? - Yeah. So for this paper, sometimes it's adding some more context on a specific part.

Like sometimes they mentioned, for instance, these instruments that were common in ancient Greece and ancient Rome for building things. And so in some of those annotations, I described these instruments in more detail and how they worked. 'Cause sometimes it can be hard to visualize these. Then this paper, I forget exactly when this was published.

I believe maybe the seventies. But then there was further research into this tunnel and other interesting aspects of it, and I added those to that paper as well. There's historical context that I also go into there. For instance, the fact that, as I said, effectively after the downfall of the Roman Empire, no tunnels were built.

This is something that I added to the paper as well. Yeah. So those are- - So when other people look at the paper, how do they usually consume the annotations? Is there a commenting feature? I mean, this is a really enriching way to read a paper.

What aspects do people usually talk about that they value from this? Yeah. So anybody can just go on there and either add a new annotation or other comments to an existing annotation. And so you can start a kind of a thread within an existing annotation. And that's something that happens relative frequency.

And then because I was the original author of the initial annotation, I get pinged. And so oftentimes I'll go back and add on to that thread. - How'd you pick the paper? I mean, first of all, this whole process is really exciting. I'm gonna, especially after this conversation, I'm gonna make sure I participate much more actively on papers that I know a lot about and on paper I know nothing about.

I should- - You get to annotate the paper. - I would love to. I also, I mean, I realized that there's a, like, it's an opportunity for people like me to publicly annotate a paper. Like- - Or do an AMA around the paper. - Yeah, exactly. But yeah, but like be in the conversation about a paper.

It's like a place to have a conversation about an idea. The other way to do it, that's much more ad hoc, is on Twitter, right? But this is more formal, and you could actually probably integrate the two and have a conversation about the conversation. So Twitter is the conversation about the conversation, and the main conversation is in the space of annotations.

- There's an interesting effect that we see sometimes with the annotations on our papers is that a lot of people, especially if the annotations are really well done, people sometimes are afraid of adding more annotations because they see that as a kind of a finished work. And so they don't want to pollute that.

And especially if it's like a silly question. I don't think that's good. I think we should as much as possible try to lower the barrier for someone to jump in and ask questions. I think most of the times it adds value, but it's some feedback that we got from users and readers.

I'm not exactly sure how to kind of fight that, but... - Well, I think if I serve as an inspiration in any way is by asking a lot of dumb questions and saying a bunch of dumb shit all the time. And hopefully that inspires the rest of the other folks to do the same, because that's the only way to knowledge, I think, is to be willing to ask the dumb questions.

- And there are papers that are like... We have a lot of papers on Fermat's where it's just one page, or really short papers. And we have like the shortest paper ever published in a math journal, with just a couple of words. One of my favorite papers on the platform is actually a paper written by Enrico Fermi.

And the title of the paper, I think, is "My Observations at Trinity." So basically Fermi was part of the Manhattan Project. He was in New Mexico when they exploded the first atomic bomb. He was a couple of miles away from the explosion, and he was probably one of the first people to calculate the energy of the explosion.

And the way he did that was he took a piece of paper and tore it into little pieces. When the bomb exploded, Trinity was the name of the test, he waited for the blast to arrive where he was, then he threw those pieces of paper in the air, and he calculated the energy based on the displacement of the pieces of paper.

And then he wrote a report, which was classified until a couple of years ago. A one-page report, calculating the energy of the explosion. - Oh, that's so badass. - And we actually went and unpacked it. I think he just mentions the energy, and one of the annotations explains how he did that.

- I wonder how accurate he was. - It was maybe, I think, like 20 or 25% off. Then there was another person that actually calculated the energy based on images after the explosion and the rate at which the mushroom of the explosion expanded. And it's more accurate to calculate the energy based on that.

And I think it was like 20% off. But it's really interesting because Fermi was known for being a master at these back-of-the-envelope calculations; the Fermi problems are well known for that. And it's super interesting to see it all in just a one-page report, which was also actually classified.
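
As an aside, the photo-based estimate mentioned above is usually attributed to G. I. Taylor's dimensional analysis of the fireball; a minimal sketch of that scaling, with illustrative symbols:

```latex
% For a strong point explosion of energy E in air of density \rho_0,
% dimensional analysis gives the blast-wave radius R at time t as
\[
  R(t) \sim \left( \frac{E\, t^{2}}{\rho_{0}} \right)^{1/5}
  \quad \Longrightarrow \quad
  E \sim \frac{\rho_{0}\, R^{5}}{t^{2}},
\]
% so measuring R at known times from the published photographs gives the
% yield to within a dimensionless factor of order one.
```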

And it's interesting because a couple of months ago, when the Beirut explosion happened, there was a video circulating of a bride who was doing a photo shoot when the explosion happened. You can see a video of her in the wedding dress, and then the explosion happens and the blast arrives where she was.

She was a couple of miles away from the blast, and you can see the displacement of the dress as well. That video went viral on Twitter, and I actually looked at it and used the same technique that Fermi used, calculating the energy of the explosion based on the displacement of the dress.

And you could actually see where she was and her distance from the explosion, because there was a store behind her and you could look up the name of the store. So I calculated that. - The distance, and then you can... - Based on her distance from the explosion and also on the displacement of the dress, because when the blast happens, you can see the dress being pushed back and then returning to its original position.

And just by looking at how much the dress moved, you can estimate the energy of the explosion. - I assume you published this. - On Twitter, it was just a Twitter thread. But actually a lot of people shared it and it was picked up by a couple of news outlets.

- I was hoping it would have a formal title and be on arXiv. - No, no, no, no. - Maybe it should be submitted. - Just the Twitter thread. But it was interesting because it was exactly the same method that Fermi used. - Is there something else that jumps to mind?

Like, I know in terms of papers that the Bitcoin paper is super popular. Is there something interesting to be said about any of the white papers in the cryptocurrency space? - Yeah, the Bitcoin paper was the first paper that we put on Fermat's.

- Why that choice as the first paper? - This was a while ago, and it was one of the papers that I read and then explained to Luis and the two other friends who do this journal club with us. And I did some research in cryptography as an undergrad.

And so it was a topic that I was interested in. But even for me, that I had that background, but reading the Bitcoin paper, it took me a few reads to really kind of wrap my head around it. It uses very spartan, precise language in a way. It's like, you feel like you can't take any word out of it without something falling apart.

And it's all there. I think it's a beautiful paper and it's very well-written, of course, but we wanted to try to make it accessible so that anybody that maybe is an undergrad in computer science could go on there and know that you have all the information in that page that you're going to need to understand the mechanics of Bitcoin.

And so I explain the basic public-key cryptography that you need to know in order to understand it. I explain what the properties of a hash function are and how they're useful in this context. I explain what a Merkle tree is. A bunch of those basic concepts where, if you're reading it for the first time and you're an undergrad and you don't know those terms, you're going to be discouraged, because now you have to go and Google around until you understand them before you can make progress in the paper.
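
As a quick illustration, here is a minimal sketch of the kind of Merkle tree those annotations explain; the function names are made up for this sketch, and the single round of SHA-256 is a simplification of what Bitcoin actually does.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    # One round of SHA-256 (Bitcoin double-hashes; simplified here).
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Hash the leaves, then fold each level pairwise until one root remains.
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:   # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Changing any transaction changes the root, which is why a block header
# only needs to commit to this single hash.
txs = [b"alice->bob:1", b"bob->carol:2", b"carol->dave:3"]
print(merkle_root(txs).hex())
```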

And this way it's all there. So there's a magic to... Also to the fact that over time, more people went on there and added further annotations. So the idea that the paper gets easier and more accessible over time, but you're still looking at the original content the way the author intended it to be.

But there's just more context and the toughest bits have more in-depth explanations. - Okay, I think there's just so many interesting papers there. I remember reading the paper that was written by Freeman Dyson on the first time that he explained, he came up with the concept of the Dyson sphere and he put that out.

Again, it's a one-page paper. And what he explained was that eventually, if a civilization develops and grows, there's going to be a point when the resources on the planet are not enough for the energy requirements of that civilization. So the next step, if you want to keep growing, is to go to the star and extract energy from that star.

And the way to do it is you need to build some sort of shell around the star that extracts the energy. So he theorized this idea of the Dyson sphere, and he went on to analyze how you would build that, the stability of that sphere. If something happens, if there's a small oscillation, would that sphere collapse into the star or not, what would happen?

And he even went on to say that a good way for us to look for signs of intelligent life out there is to look for signals of these Dyson spheres. Because, according to the second law of thermodynamics, there's going to be a lot of infrared radiation emitted as a consequence of extracting energy from the star.

And we should be able to see those infrared signals if we look at the sky. All of this, from the introduction of the concept, to how to build the Dyson sphere, to the problems of having a Dyson sphere, to how it could be detected and used as a signal for intelligence.

- Wait, really? That's all in the paper? - Yeah, all in one, like one page paper. And it's like, for me, it's beautiful. It's like- - Where was this published? - I don't remember. - It's fascinating that papers like that could be, I mean, the guts it takes to put that all together in a paper form.

You know, that kind of challenges our previous discussion of paper. I mean, papers can be beautiful. You can play with the format, right? - But there's a lot to unpack there. That's like the starting point, but it's beautiful that you're able to put that in one page and then people can build on top of that.

- But the key ideas are there. - Yeah, exactly. - What about, have you looked at any of the big seminal papers throughout the history of science? Like you look at simple, like Einstein papers. Have any of those been annotated? - Yeah, yeah, no. We have some more seminal papers that people will have heard about.

You know, we have the DNA double helix paper on there. We have the Higgs boson paper. Yeah, there are papers where we know people are not going to be finding out about them because of us, but they're papers that we think should be more widely read and that folks would benefit from having some annotations on.

And so we also have a number of those. - Yeah, a lot of like discovery papers for fundamental particles and all that. We have a lot of those on Fermat's library. I would like to, we haven't annotated that one, but I'd like to on the Riemann hypotheses. That's a really interesting paper as well.

But there are a lot of more historical landmark papers on the platform. - Have you done the Poincaré conjecture, with Perelman? - That's too much. - Too much? - That's too much for me, but it's interesting, going back to our discussion, that the Poincaré papers were published on arXiv and not in a journal, the three papers.

- Yeah, what do you make of that? I mean, he's such a fascinating human being. - Exactly. - I mentioned to you offline that I'm going to Russia. He's somebody I'm really- - You should try to interview him. - Yeah, well, so I definitely will interview him. And I believe I will, I believe I can, I just don't know how to, I know where he lives.

So here, okay. My hope is, my conjecture is that if I just show up to the house and look desperate enough, that, or threatening enough, or some combination of both, that like the only way to get rid of me is to just get the thing done. That's the hope.

- It's actually interesting that you mention that, because a couple of weeks ago I was searching for stuff about Perelman online and ended up on this Twitter account of a guy who claims to be Perelman's assistant. And he has been posting a bunch of pictures next to Perelman.

You can see Perelman in a library with him next to him taking a selfie, or Perelman walking on the street. Maybe you could reach out to this assistant; I'll send you this Twitter account. - So maybe you're onto something. - No, but going back to Perelman, he's super interesting, because the fact that he published the proofs on arXiv was also a way for him to protest; he really didn't like the scientific publishing industry and the fact that you had to pay to get access to articles.

And that was a form of protest. That's why he published those papers there. I mean, I think Perelman is just a fascinating character. And for me, he's this kind of platonic ideal of what a mathematician should be. It's someone who just deeply cares about mathematics.

He cares about fair attribution and disregards money. And the fact that he published on arXiv is a good example of that. - What about the Fields Medal? He turned down the Fields Medal. What do you make of that? - Yeah, I mean, look at the reasons why he rejected the Fields Medal.

So Perelman did a postdoc in the US and then he came back to Russia. - Do you know how good his English is? - I think it's fairly good. - I think it's pretty good. - I think it's really good. - Especially having given lectures at American universities.

- But I haven't been able to... - Listen to anything. - Well, certainly not listen, but I haven't been able to get anybody. Cause I know a lot of people have been to those lectures. I'm not able to get a sense of like, yeah, but how strong is the accent?

What are we talking about here? Is this going to have to be in Russian? Is it going to have to be in English? It's fascinating. But he writes the papers in English. - True. But he's such a fascinating character. And there are a couple of examples: at, I think, 28 or 29, he proved a really famous conjecture called the Soul Conjecture, I believe with a very short, four-page proof.

It was a really big breakthrough. Then he went to Princeton to give a lecture on that. And after the lecture, the chair of the math department at Princeton, a guy called Peter Sarnak, went up to Perelman and was trying to recruit him, trying to offer him a position at Princeton.

And at some point he asked for Perelman's resume. And Perelman responded saying, I just gave a lecture on this really tough problem, why do you need my resume? I'm not going to send it, I just proved my value. But going back to the Fields Medal, when Perelman went back to Russia, he arrived at a time when the salaries of postdocs were so far behind inflation that they were not making any money.

People didn't even bother to pick up the checks at the end of the month because they were just ridiculous. But thankfully he had some money that he had saved while he was doing his postdocs. So he just concentrated on the Poincaré conjecture problem, which he took on after it was reframed by this mathematician called Richard Hamilton, who posed the problem in a way that turned it into this super well-defined, math-Olympiad-style problem with perfect boundaries, and that was perfect for Perelman to attack.

And so he spent about seven years working on that. And then in 2002, he started publishing those papers on arXiv. People started jumping on that, reading those papers, and there was a lot of excitement around it. A couple of years later, there were two researchers, I believe they were from Harvard, who took Perelman's work.

They sanded some of the edges and republished it, saying that, based on Perelman's work, they were able to figure out the Poincaré conjecture. And then at the International Congress of Mathematicians in 2006, I believe, that's when they were going to give out the Fields Medal.

There was a lot of debate about who should get the credit for solving this big problem. And for Perelman, it felt really sad that people were even considering that he was not the person who solved it. And the claims those researchers made when they published after Perelman, that they were the ones who solved it, were false claims; they just sanded a couple of edges, Perelman did all the really hard work.

And so just the fact that they doubted that Perelman had done it was enough for him to say, I'm not interested in this prize. And that was one of the reasons why he rejected the Fields Medal. Then he also rejected the Clay Prize. The Poincaré conjecture was one of the Millennium Prize Problems.

There was a million-dollar prize associated with that problem. And that has to do with the fact that for them to award the prize, I think the proof had to be published in a journal. And again, Perelman's principles interfered here. And he also just didn't care about the money.

He was like, Clay, I think, was a businessman who doesn't have anything to do with mathematics, I don't care about this. That's one of the reasons why he rejected it. - Yeah, it's hard to convert into words, but at MIT I'm distinctly aware of the distinction: when I enter a room, there's a certain kind of music to the way people talk when we're talking about ideas, versus what that music sounds like when it's bickering in the space of politics or funding or egos. It has a different sound to it.

And I'm distinctly aware of the two. To me personally, happiness has just been swimming away from the one that is the political stuff or the money stuff or the egos. And I think that's probably how Perelman is as well. The moment he senses any of it, as with the Fields Medal, the moment you start to have any kind of drama around credit assignment, all those kinds of things, it's almost not that it's important who gets the credit.

It's that the drama in itself gets in the way of the exploration of the ideas, the fundamental thing that makes science so damn beautiful. - And you can really see that this is also a product of that Russian school of doing science. During the Cold War, a lot of mathematicians were not making any money.

They were doing math for the sake of math, for the intellectual pleasure of solving a difficult problem. And even if it was a flawed system and there were a lot of problems with it, they were able to actually achieve this. And Perelman, for me, is the perfect product of that.

He just cared about like working on tough problems. He didn't care about anything else. It was just math, pure math. - Yeah, there's a, like for the broader audience, I think another example of that is like professional sports versus Olympics. Especially in Russia, I've seen that clear distinction where, because the state manages so much of the Olympic process in Russia, as people know, with the steroids, yes, yes, yes.

But outside of the steroids thing, the athlete can focus on the pure artistry of the sport, not worry about the money, in the way they talk about it, the way they think about it, the way they define excellence; versus, in the perhaps more capitalist system in the United States with American football, with baseball, basketball, so much of the discussion is about money.

Now, of course, at the end of the day, it's about excellence and artistry and all of that. But when the culture is so richly grounded in discussions of money and sort of this capitalistic like merch and businesses and all those kinds of things, it changes the nature of the activity.

And it's in a way that's hard again to describe in words, but when it's purely about the activity itself, it's almost like you quiet down all the noise enough to hear the signal, enough to hear the beauty. Like whenever you're talking about the money, that's when the marketing people come and the business people, the non-creators come and they fill the room and they create drama and they know how to create the drama and the noise, as opposed to the people who are truly excellent at what they do, the person in their arena.

Like when you remove all the money and you just let that thing shine, that's when true excellence can come out. And that was one of the few things that worked with the communist system, the Soviet Union; to me at least, as somebody who loves sport and loves mathematics and science, that worked well, removing the money from the picture.

Not that I'm saying poverty is good for science, but there's some level at which not worrying about money is good for science. It's weird, I'm not exactly sure what to make of it, because capitalism works really damn well, but it's tricky to find that balance. - One Fields Medalist who is interesting to look at, and I think you mentioned him earlier, is Cédric Villani, who might be the only Fields Medalist who is also a politician now.

But so it's this brilliant French mathematician that won the Fields Medal. And after that, he decided that one of the ways that he could have the biggest leverage kind of in pushing science in the direction that he thinks science should go would be to try to go into politics.

And so that's what he did. And he has run, I'm not sure if he has won any election, but- - I think he's running for mayor of Paris. - Paris or something like that. But it's this brilliant mathematician who, before winning the Fields Medal, had only been a brilliant mathematician.

But after that, he decided to go into politics to try to have an impact and try to change some of the things that he would complain about before. So there's that component as well. - Yeah, and I've always thought mathematics and science should be like that. James Bond would, in my eyes, be sexier if he did math.

Like we should, as a society, put excellence in mathematics at the same level as being able to kill a man with your bare hands. Like those are both useful features, like that's admirable. It's like, oh, like that makes you like, that makes the person interesting. Like being extremely well read about history or philosophy, being good in mathematics, being able to kill a man with bare hands.

Those are all the same in my book. So I think all are useful for action stars, and I think society would benefit from giving more value to that. One of the things that bothers me about American culture is, I don't know the right words to use, but the nerdiness associated with science.

Like, I don't think nerd is a good word in American culture because it's seen as like weakness. There's like images that come with that. And it's fine, you could be all kinds of shapes and colors and personalities, but like, to me, having sophisticated knowledge in science, being good at math, doesn't mean you're weak.

In fact, it could be the very opposite. And it's an interesting thing because it was viewed very differently in the Soviet Union, so I know for sure, as an existence proof, that it doesn't have to be that way. But it... - I also feel like we lack a lot of role models. If you ask people to name one mathematician they know who is alive today, I think a lot of people would struggle to answer that question.

- And I also think, I love Neil deGrasse Tyson, but having more role models is good, different kinds of personalities. He's kind of fun and it's very much like Bill Nye the Science Guy, I don't know if you guys know him. - Same spectrum. - That, yeah.

But Feynman is no longer there, those kinds of personalities. - Carl Sagan, man. - Even Carl Sagan, yeah. A seriousness that's not playful. - Not apologetic. - Yeah, exactly. Not apologetic about being knowledgeable. And in fact, the kind of energy where you feel self-conscious about not having thought about some of these questions.

Just like when I see James Bond, I feel bad that I've never killed a man, like I need to make sure I fix that. That's the way I feel. In the same way, when Carl Sagan talks, I feel like I need to have that same kind of seriousness about science.

Like if I don't know something, I want to know it well. What about Terence Tao? He's kind of a superstar. What are your thoughts about him? - True. He's probably one of the most famous mathematicians alive today. He of course won a Fields Medal, and he's a really smart and talented mathematician.

He's also a big inspiration for us, at least for some of the work that we do with Fermat's Library. Terence Tao is known for having a big blog, and he's pretty open about his research. He tries to make his work as public as possible through his blog posts.

In fact, there's a really interesting problem that got solved a couple of years ago. Tao was working on a problem, on an Erdős problem actually. Paul Erdős was this mathematician from Hungary, and he was known for a lot of things, but one of the things he was also known for were the Erdős problems.

He was always creating these problems and usually associating prizes with them. A lot of those problems are still open, and some of them will stay open for maybe a couple of hundred years. And I think that's actually an interesting hack for him to collaborate with future mathematicians.

His name will keep coming up for future generations. But so Tao was working on one of these problems, the Erdős discrepancy problem, and he published a blog post about it and reached a dead end. And then all of a sudden there was this guy from Germany who wrote a comment on his blog post saying, "Okay, so this problem has a Sudoku-like flavor, and some of the machinery that we use to solve Sudoku could be used here." And that was actually the key to solving the Erdős discrepancy problem.

So there was a comment on his blog. And I think that for me is an example of like how to do, again, going back to collaborative science online and the power that it has. But Tao is also like pretty public about like some of the struggles of being a mathematician.

And even he wrote about some of the unintended consequences of having extraordinary ability in a field. And he used himself as an example. When he was growing up, he was extremely talented in mathematics from a young age. Like Tao was a person, he won a medal in like one of the IMOs at the age I think was a gold medal at the age of 10 or something like that.

And so he mentioned that when he was growing up, like, and especially in college, when he was in a class that he enjoyed, it just came very natural for him and he didn't have to work hard to just ace the class. And when he found that the class was boring, like it didn't work and he barely passed.

I think in college, he almost failed two classes. And he was talking about that, and how he brought those studying habits, or rather that absence of studying habits, with him when he went to Princeton for his PhD. And at Princeton, when he started delving into more complex problems in classes, he struggled a lot because he didn't have those habits; he wasn't taking notes and he wasn't studying hard when he faced problems.

He almost failed out of his PhD, he almost failed his PhD exam. And he talks about having this conversation with his advisor, and the advisor pointing out, this is not working, you might have to get out of the program, and how that was kind of a turning point for him.

And like it was super important in his career. So I think Tao is also like this figure that apart from being just an exceptional mathematician, he's also pretty open about what it takes to be a mathematician and some of the struggles of these type of careers. And I think that's super important.

- In many ways, he's a contributor to open science and open humanity. He's being an open human by communicating. Scott Aaronson is another one, in the computer science world, with a very different style, but there's something about a blog that is authentic and real and just gives us a window into the mind and soul of these brilliant folks.

So it's definitely a gift. Let me ask you about Fermat's Library on Twitter, which, I mean, I don't know how to describe it. People should definitely just follow Fermat's Library on Twitter. I keep following and unfollowing Fermat's Library because when I follow it, it leads me down rabbit holes that are often very fruitful. But anyway, the posts you do on Twitter are these beautiful things that reveal some beautiful aspect of mathematics.

Is there something you could say about the approach there? And maybe broadly what you find beautiful about mathematics, and then more specifically how you convert that into a rigorous process of revealing it in tweet form. - That's a good point. I think there's something about math, where a lot of the mathematical content and papers are like little proofs, that has in a way sort of an infinite half-life.

What I mean by that is that if you look at like Euclid's elements, it's as valid today as it was when it was created like 2000 years ago. And that's not true for a lot of other scientific fields. And so in regards to Twitter, I think there's also a very, it's a very under-explored platform from a learning perspective.

I think if you look at content on Twitter, it's very easy to consume, very easy to read. And especially when you're trying to explain something, we humans get a dopamine hit when we learn something new, and that's a very powerful feeling. That's why people go to classes with a really good professor; they're looking for those dopamine hits.

And that's something that we try to explore when we're producing content on Twitter. Imagine if, while waiting in line at a restaurant, you could go to your phone to learn something new instead of going to a social network. And it's sometimes very hard to provide that feeling, because you need to digest content and put it in a way that fits in 280 characters.

And it sometimes requires a lot of time to do that; even though it's easy to consume, it's hard to make. But once you're able to provide that eureka moment to people, that's very powerful, you get that dopamine hit, and you create this feedback cycle and people come back for more.

And on Twitter, compared to an online course or a book, you have a 0% dropout, so people will read the content. So it's on the creator, the person creating the content; if you're able to actually get that feedback cycle going, it's super powerful. Yeah. But some of this stuff is like, how the heck do you find that?

And I don't know why it's so appealing. This is from, what is it, a couple of days ago. I'll just read out the number: 2, 3, 4, 5, 6, 7, 8, 9, that is, 23456789, is the largest prime number with consecutive increasing digits. I mean, that is so cool. That's like some weird glimpse into some deep universal truth, even though it's just a number.
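
As a quick check of that factoid, here is a minimal sketch; it assumes the candidates are exactly the runs of consecutive increasing digits taken from 123456789.

```python
def is_prime(n: int) -> bool:
    # Simple trial division; plenty fast for numbers this small.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Every number with consecutive increasing digits is a contiguous run of
# "123456789", e.g. 7, 45, 23456789, 123456789.
digits = "123456789"
candidates = [int(digits[i:j]) for i in range(9) for j in range(i + 1, 10)]

print(max(n for n in candidates if is_prime(n)))  # 23456789, matching the tweet
```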

I mean, that's like so arbitrary. Like why is it so pleasant that that's a thing? But it is in some way, it's almost like it is a little glimpse at some much bigger like. - And I think like, especially if we're talking about science, there's something unique about you go, and with a lot of the tweets, you go sometimes from a state of not knowing something to knowing something.

And that is very particular to science, math, physics, and that, again, is extremely addictive. That's how I feel about it, and that's why I think people engage so much with our tweets and go into rabbit holes. You start with prime numbers and all of a sudden you're spending hours reading number theory things, and you go into Wikipedia and you lose a lot of time there.

- Well, the variety is really interesting too. There's human things, there's physics things, there's like numeric things, like I just mentioned, but there's also more rigorous mathematical things. There's stuff that's tied to the history of math and the proofs and there's visual, there's animations, there are looping animations that are incredible, that reveal something.

There's Andrew Wiles on being smart. This is just me now, like ignoring you guys and just going through your Twitter. - No, yeah, we're a bit like math drug dealers. We're just trying to get you hooked. We're trying to give you that hit and trying to get you hooked.

- Yes, some people are brighter than others, but I really believe that most people can really get to quite a good level of mathematics if they're prepared to deal with these psychological issues of how to handle the situation of being stuck. - Yeah. - Yeah, there's some truth to that.

- That's true. I feel there's really some truth to that, in terms of research and also startups. You're stuck a lot of the time before you get to a breakthrough, and it's difficult to endure that process of being stuck because you're not trained to be in that position.

I feel, yeah, that's- - Yeah, most people are broken by the stuckness, or they get distracted. I've been very cognizant of the fact that as social media becomes more and more of a thing, distractions become a thing; in that moment of being stuck, your mind wants to go do stuff that's unrelated to being stuck, and you should stay stuck.

I'm referring to small stucknesses, like you're trying to design something and it's a dead end, basically little dead ends, dead ends in programming, dead ends in trying to think through something. And then your mind wants to escape. This is the problem with this work-life balance culture of take a break, as if taking a break will solve everything.

Sometimes it solves quite a bit, but like sometimes you need to sit in a stuckness and suffer a little bit and then take a break, but you definitely need to be, and like most people quit from that psychological battle of being stuck. So success is people who persevere through that.

- Yeah, and in the creative process, that's also true. The other day I was reading about, what is his name, Ed Sheeran, the musician, talking a little bit about the creative process, and he was using this analogy of a faucet: when you turn on a faucet, it's as if dirty water comes out in the beginning, and you just have to keep trusting that at some point clear water will come out, but you have to endure that process.

Like in the beginning, it's going to be dirty water and just embrace that. - Yeah, actually this, the entirety of my YouTube channel and this podcast have been following that philosophy of dirty water. Like I've been, you know, I do believe that, like you have to get all the crap out of your system first.

And sometimes it's all crappy work. I tend to be very self-critical, but I do think that quantity leads to quality for some people. It does for me; the way my mind works is, just keep putting stuff out there, keep creating, and the quality will come, as opposed to sitting there waiting, not doing anything until the thing seems perfect.

Cause the perfect may never come. - But just, just on like on, on, on our Twitter profile, I really, and sometimes when you look on, on some of those tweets, they might seem like pretty kind of, you know, why is this interesting? It's like, so raw, like it's just a number, but I really believe that especially with math or physics, it is possible to get everyone to love math or physics, even if you think you hate it.

It's not a function of the student or the person on the other side. I think it's purely a function of how you explain a hidden beauty that they hadn't realized before. It's not easy, but I think a lot of the time it's on the creator's side to be able to show that beauty to the other person.

- I think some of that is native to humans. We just have that curiosity; you look at small toddlers and babies trying to figure things out. There's just something born with us, that we want that understanding. We want to figure out the world around us.

And so, yeah, it shouldn't be a question of whether or not people are going to enjoy it. I also really believe that everybody has the capacity to fall in love with math and physics. - You mentioned startups. What do you think it takes to build a successful startup?

- Yeah, it's what Luis was saying, that you need to be able to endure being stuck. And I think the best way to put it is that startups don't have a linear reward function, right? You oftentimes don't get rewarded for effort. And in most of our lives, we go through processes that do give you those small rewards for effort, right?

In school, you study hard, generally you'll get a good grade, and you get grades every semester. And so you're slowly getting rewarded and pushed in the right direction. For startups, and startups are not the only thing that is like this, you can put a ton of effort into something and then get no reward for it, right?

It's like Sisyphus' boulder, where you're pushing that boulder up the mountain and you get to the top and then it just rolls all the way back down. And so that's something that I think a lot of people are not equipped to deal with, and it can be incredibly demoralizing, especially if it happens more than a few times.

But I think it's absolutely essential to power through it, because by the nature of startups, you're oftentimes dealing with non-obvious ideas and things that might be contrarian. And so you're going to run into that a lot. You're going to do things that are not going to work out, and you need to be prepared to deal with that.

But if you're coming straight out of college, you're just not equipped. I'm not sure if there's a way to train people to deal with those non-linear reward functions, but it's definitely, I think, one of the most difficult things about doing a startup. It also happens in research sometimes; as we were saying, the default state is being stuck.

You try things, you get zero results, you close doors; you're constantly closing doors until you find something. And yeah, that is a big thing. - What about the point when you're stuck and there's a decision to make: whether you have a vision to persist with the direction you've been going, or do what a lot of startups or businesses do, which is pivot?

How do you decide whether like to give up on a particular flavor of the way you've imagined the design and to like adjust it or completely like alter it? - I think that's a core question for startups that I've asked myself exactly. And like, I've never been able to come up with a great framework to make those decisions.

I think that's really at the core of a lot of the toughest questions that people who start a company have to deal with. - I think maybe the best framework I was able to figure out is: when you run out of ideas. You're exploring something, it's not working, you try it from a different angle, you try a different business model.

When you run out of ideas, when you don't have any more cards, just switch. And it's not perfect, because you also have a lot of stories of startups where people kept pushing and then that paid off. And then you have philosophies like fail fast and pivot fast.

So it's hard to balance these two worlds and understand what the best framework is. - And if you look at Fermat's Library, maybe you can correct me, but it feels like you're operating in a space where there are a lot of things that are broken or could be significantly improved.

So it feels like there's a lot of possibilities for pivoting or like, how do you revolutionize science? How do you revolutionize the aggregation, the annotation, the commenting, the community around information of knowledge, structured knowledge? I mean, that's kind of what like Stack Overflow and Stack Exchange has struggled with to come up with a solution.

And they've come up, I think, with an interesting set of solutions that are also flawed in some ways, but they're much, much better than the alternatives. But there are a lot of other possibilities. If we just look at papers, as we talked about, there are so many possible revolutions, and there's a lot of money to be potentially made in those revolutions, coupled with the benefit to humanity.

And so like you're sitting there, like, I don't know how many people are legitimately from a business perspective, playing with these ideas. It feels like there's a lot of ideas here. - True, it varies. - Are you right now grinding in a particular direction? Like, is there a five-year vision that you're thinking in your mind?

- For us, it's more like a 20-year vision, in the sense that we've consciously tried to make that decision. We run Fermat's Library as a side project; it's not what we're working on full time. But our thesis is that that's actually a good thing, at least for this stage of Fermat's Library.

And also because with some of these projects, if you're coming from a startup framework, you probably try to fit every single idea into something that can change the world within three to five years. And there are just some problems that take longer than that, right? We were talking about arXiv, and I'm very doubtful that you could grow something like arXiv into what it is today within two or three years, no matter how much money you throw at it. Some things just take longer.

But you need to be able to power through the time that it takes. But if you look at it as, "Okay, this is a company, this is a startup, we have to grow fast, we have to raise money," then sometimes you might forego those ideas because of that, because they don't very well fit into the typical startup framework.

And so, for us, Fermat's Library is something that we're okay with having grow slowly, maybe taking many years. And that's why we think it's not a bad thing that it is a side project, because it makes it much more acceptable, in a way, to be okay with that.

- That said, I think what happens is, if you keep pushing new little features, new little ideas, certain ideas will just go viral. And then you won't be able to help yourself, and it'll revolutionize things. It feels like there needs to be, not needs to be, but there's an opportunity for viral ideas to change science.

- Absolutely. - And maybe we don't know what those are yet. It might be a very small kind of thing. - Maybe you don't even know if, should this be a for-profit company doing this? - Right. It's the Wikipedia question. - Yeah. There are a lot of questions, like really fundamental questions about this space that we've talked about.

- I mean, you take Wikipedia and you try to run it as a startup, and by now it would have a paywall; you'd be paying $9.99 a month to read more than 20 articles. - I mean, that's one view. The other is the ad-driven model. So, they rejected the ad-driven model.

I don't know, I mean, this is a difficult question. If arXiv was supported by ads, I don't know if that's bad for arXiv. If Fermat's Library was supported by ads, I don't know. It's not trivial to me. Unlike, I think, a lot of people, I'm not against advertisements.

I think ads, when done well, are really good. I think the problem with Facebook and all the social networks is the lack of transparency around the way they use data and the lack of control users have over their data. Not the fact that data is being collected and used to sell advertisements.

It's the lack of transparency, lack of control. If you do a good job of that, I feel like it's a really nice way to make stuff free. - Yeah. It's like Stack Overflow, right? I mean, I think they've done a good job with that, even though, as we said, they're capturing very little of the value that they're putting out there, right?

But it makes it a sustainable company and they're providing a lot of, it's a fantastic and very productive community. - Let me ask a ridiculous tangent of a question. Luis, you wrote a paper on Game of Thrones, Battle of Winterfell, just as a side little, I'm sorry, I noticed, I'm sure you've done a lot of ridiculous stuff like this.

I just noticed that particular one. By ridiculous, I mean ridiculously awesome. Can you describe the approach in this work, which I believe is a legitimate publication? - So going back to the original, like when we were talking about the backstory of papers and the importance of that. So this is actually, when the last season of the show was airing, this was during a company lunch.

In the last season, there's a really big battle between the forces of evil and the forces of good, called the Battle of Winterfell. And in this battle there are these two armies, and there's a very particular thing they have to take into account: if someone dies in the army of the living, that person is going to be reborn as a soldier in the army of the dead.

And so that was an important thing to take into account. - And the initial conditions, as you specify, it's about 100,000 on each side. - Exactly. So I was able to, like based on some images, like on previous episodes to figure out what was the size of the armies.

And so what we were theorizing was: how many soldiers does a soldier in the army of the living have to kill in order to destroy the army of the dead without losing? Because every time one of the good soldiers dies, it turns into a soldier on the other side.

And so we were theorizing about that, and I wrote a couple of differential equations, and I was able to figure out that, based on the size of the armies, the ratio had to be about 1.7. So each living soldier had to kill about 1.7 soldiers of the army of the dead in order for them to win the battle.
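The paper's actual equations aren't reproduced in the transcript, so purely as an illustration, here is a toy sketch of that kind of model: a Lanchester-style system where every fallen living soldier is reborn on the dead side, with a bisection search for the critical kill ratio. The rate constants, the forward-Euler integration, and the resulting number are assumptions of the sketch and won't necessarily match the paper's 1.7 figure.

```python
# Toy sketch of the kind of model described, NOT the paper's exact equations.
# Living army L and dead army D; every living casualty is "reborn" on the dead side:
#   dL/dt = -d * D            (living are killed at a rate proportional to the dead)
#   dD/dt = -k * L + d * D    (dead are destroyed by the living, replenished by living deaths)
# The kill ratio r = k / d is what we scan for.

def living_wins(ratio, L0=100_000.0, D0=100_000.0, d=1.0, dt=1e-3, max_steps=200_000):
    """Forward-Euler integration; True if the dead army hits zero before the living do."""
    k = ratio * d
    L, D = L0, D0
    for _ in range(max_steps):
        L, D = L - d * D * dt, D + (d * D - k * L) * dt
        if D <= 0:
            return True    # dead army destroyed first: the living win
        if L <= 0:
            return False   # living army wiped out (and converted) first
    return False           # undecided within the horizon; treat as a loss

# Bisect for the critical kill ratio with equal starting armies.
lo, hi = 1.0, 4.0
for _ in range(25):
    mid = (lo + hi) / 2
    if living_wins(mid):
        hi = mid
    else:
        lo = mid
print(f"critical kills per living soldier (this toy model): ~{hi:.2f}")
```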

- Yeah, that's science. It's the most powerful. And this is also somehow a pitch for like a hiring pitch in a sense like this is the kind of important science you do at lunch. - Exactly. Well, it turned out to be, you know, as for people that have watched these shows, it's like they know that every time you try to predict something that is gonna happen, it's gonna, you're gonna fail miserably.

And that's what happened. So it was not at all important for the show, but we ended up like putting that out and there was a lot of people that shared that. I think it was some like elements of the show, the cast of the show that actually retweeted that and shared that.

So it was fun. - I would love if this kind of calculation happened like during the making of the show or, you know, I love it like in, for example, I now know Alex Garland, the director of Ex Machina, and I love it. And he doesn't seem to be some, not many people seem to do this, but I love it when directors and people who wrote the story really think through the technical details.

Like knowing how things would work, even if it's science fiction: if you were to try to do this, how would you do it? Stephen Wolfram and his son collaborated with the movie Arrival in designing the alien language, how you communicate with the aliens. How would you really have a math-based language that could span the alien being and the human being?

So I love it when they have that kind of rigor. - The Martian was also big on that. Like the book and the movie was all about like, can we actually, is this plausible? Can this happen? It was all about that. - And that can really bring you in.

Sometimes it's the small details. The guy that wrote The Martian has another book that is also filled with those things that, when you realize they're grounded in science, can really bring you in. - Yeah. - Like he has a book about a colony on- - A colony on the moon.

And he goes through all the details that would be required to set up a colony on the moon, things that you wouldn't think about. Like the fact that it's hard to bring air to the moon, so how do you make that environment breathable?

You need to bring oxygen, but you probably wouldn't bring nitrogen. So what you do is have an atmosphere that is a hundred percent oxygen, but you decrease the pressure so that you have the same partial pressure of oxygen as on Earth.

And so things like: at that lower pressure, water boils at a lower temperature, so people would have coffee and the coffee would be colder. That was a problem in this moon environment. These are small things in the book, but I studied physics, so when I read them it throws me into tangents and I start researching.
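As a rough back-of-the-envelope check of that colder-coffee detail (not taken from the book itself): if the habitat keeps only Earth's oxygen partial pressure, total pressure falls to roughly a fifth of sea level, and inverting commonly quoted Antoine constants for water gives a boiling point near 61 °C. The constants and the pure-oxygen assumption are the sketch's own.

```python
# Rough check of the "colder coffee" detail, assuming a pure-oxygen habitat held at
# Earth's sea-level oxygen partial pressure (~21% of 101.3 kPa). Antoine constants
# for water (T in Celsius, P in mmHg, commonly quoted for roughly 1-100 C).
from math import log10

A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_kpa):
    """Invert the Antoine equation to get water's boiling point at a given pressure."""
    p_mmhg = pressure_kpa * 760.0 / 101.325
    return B / (A - log10(p_mmhg)) - C

earth = 101.325                  # kPa, sea level
habitat = 0.21 * earth           # same O2 partial pressure, no nitrogen
print(f"Boiling point at {earth:.0f} kPa: {boiling_point_c(earth):.0f} C")    # ~100 C
print(f"Boiling point at {habitat:.0f} kPa: {boiling_point_c(habitat):.0f} C")  # ~61 C
```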

And it's like, I really like to read books and watch movies when they go to that level of detail about science. - Yeah. I think Interstellar was one where they also consulted heavily with a number of people. - Yeah, yeah, yeah. - I think even resulted in- - In a couple of papers.

- A couple of papers about the black hole visualizations, yeah. But there are even more examples of interesting science around these fantasy worlds. We were reading at some point about these guys that were trying to figure out whether Tolkien's Middle-earth was round, whether it was a sphere or a flat earth.

- Based on the map. - Based on the map and some of the references in the books. And so- - Yeah, we actually, I think we tweeted about that. - Yeah, we did. - Based on the distance between the cities, you can actually prove that that could be like a map of a sphere or like a spheroid and you can actually calculate the radius of that planet.

- That's fascinating. I mean, yeah, that's fascinating. But there's something about like calculating the number, like exactly the calculation you did for the battle of Winterfell is something fascinating about that because that's not like being, that's very mathematical versus grounded in physics. And that's really interesting. I mean, that's like injecting mathematics into fantasy.

There's something magical about that. - I see what you're saying. And that for me, that's why I think it's also when you look at things like Fermat's Last Theorem, like problems that are very kind of self-contained and simple to state. I think like that's the same with that paper.

It's very easy to understand the boundaries of the problem. And that for me, that's why math is so appealing. And those like problems are also so appealing to the general public. It's not that they look simple or that people think that they're easy to like solve, but I feel that a lot of the times they are almost intellectually democratic because everyone understands the starting point.

You know, you look at Fermat's Last Theorem, everyone understands like, this is the universe of the problem. And the same, maybe with that paper, everyone understands, okay, these are the starting conditions. And yeah, the fact that it becomes intellectually democratic, and I think that's a huge motivation for people.

And that's why so many people gravitate towards things like the Riemann hypothesis or Fermat's Last Theorem, or that simple paper, which is just one page. It was very simple. - And I just talked to somebody, I don't know if you know who he is, Jocko Willink, a person who, among many things, loves military tactics.

So he would probably publish a follow-on paper, maybe you guys should collaborate, but he would say the basic assumptions that paper started with are flawed, because there are dragons too, right? You have to integrate tactics, because it's not a homogeneous system.

- I didn't take into account the dragons- - And he would say tactics fundamentally change the dynamics of the system. And so- - That's what happened. (laughs) - So yeah, at least from a scientific perspective, he was right, but he never published, so there you go.

Let me ask the most important question. You guys are from Portugal, both? - Yeah. - So who is the greatest soccer player, footballer of all time? - Yeah, I think we're a little bit biased on this topic, but I mean- - Maradona? - I have a huge, I have a tremendous respect for what- - Here we go.

(laughs) - This is the political- - We can convince you. - I mean, I have tremendous respect for what Ronaldo has achieved in his career. And I think soccer is one of those sports where I think you can get to maybe be one of the best players in the world if you just have like natural talent.

Even if you don't put a lot of hard work and discipline into soccer, you can be one of the best players in the world. And Ronaldo is, of course, naturally talented, but he also- - Cristiano Ronaldo, we should say, the footballer from Portugal. - Exactly, from Portugal, and not the Brazilian in this case.

And Ronaldo came from nothing. He's known for being probably one of the hardest working athletes in the game. And I see that sometimes in a lot of these discussions about the best player: a lot of people tend to gravitate towards, this person is naturally talented and the other person has to work hard.

As if it were bad that he had to work hard to be good at something. I think so many people fall into that trap. And the reason why so many people fall into that trap is this: suppose you're saying that someone is good and achieved a lot of success by working hard, as opposed to achieving success because of some sort of God-given natural talent that you can't explain why the person was born with.

What does it tell you about you? It tells you that maybe if you work hard on a lot of fields, you could accomplish a lot of great things. And I think that's hard to digest for a lot of people. - So in that way, Ronaldo is inspiring that- - I think so.

- So you find hard work inspiring, but he's way too good looking. That's the- - Yeah, yeah. - That's probably why I don't like him. - No, I like the part about the hard work, about him being one of the hardest working athletes in soccer. - So is he, to you, the greatest of all time?

Is he, will he be number one? Okay. - I agree. - Do you concur with this? - I disagree. - Well, I definitely disagree. I mean, I like him very much. He works hard. I admire what he's done; he's an incredible goal scorer, right? But first of all, for me it's Leo Messi, and there was some confusion because I kept saying Maradona is my favorite player, but I think Leo has surpassed him.

So it's Messi, then Maradona, then Pelé for me. But the reason is, there are certain aesthetic definitions of beauty that I admire, whether they came by hard work or through God-given talent or anything else. It doesn't really matter to me. There's a certain aesthetic, a genius I recognize when I see it, and it doesn't even have to be consistent.

It is consistent in the case of Messi and in the case of Ronaldo, but even just moments of genius, which is where Maradona really shines. - Even if that doesn't translate into results and goals being scored. - Right, right. And that's the challenge, because people tell me that Leo Messi has never, even on strong teams, led his national team to a World Cup, right?

That's really important to them. And to me, no: winning was never important. What's more important are the moments of genius. But you're talking about the human story, and yeah, Cristiano Ronaldo definitely has a beautiful human story. - Yeah. And I think, for me, it's hard to decouple those two.

I don't just look at the list of achievements; I like how he got there and how he keeps pushing the boundaries. He's almost 40, and that sets an example. Maybe 10 years ago I wouldn't have ever imagined that one of the top players in the world could still be a top player at 37.

- But so, and there's an interesting, the human story is really important, but like, if you look at Ronaldo, he's like, he's somebody like kids could aspire to be. But at the same time, I also like Maradona who like is a tragic figure in many ways. It's like the, you know, the drugs, the temper, all of those things, that's beautiful too.

Like I don't necessarily think to me, the flaws are beautiful too. And athletes, I don't think you need to be perfect from a personality perspective. Those flaws are also beautiful. So, but yeah, there is something about hard work and there's also something about the being an underdog and being able to carry a team.

That's an argument for Maradona. I don't know if you can make that argument for Messi or Ronaldo either, 'cause they've both played on superstar teams for most of their lives. So it's difficult to know how they would do if they had to do what Maradona had to do to carry a team on his shoulders.

And Pelé did as well, depending on the context. - Maybe you could argue that with the Portuguese national team, but we have a good team. Yeah, but maybe with what Maradona did with, you know, Naples and a couple other teams, it's incredible. - It speaks to the beauty of the game that, you know, we're talking about all these different players that have, or especially, you know, if you're comparing Messi and Ronaldo, they have such different, you know, styles of play and also even their bodies are so different.

But these two very different players can be at the top of the game. And that's not, there are not a lot of other sports where you have that, you know, like you have kind of a mental image of a basketball player and like the top basketball players kind of fit that mental image and they look a certain way.

But for soccer, there's, it's not so much like that. And I think that's beautiful, but that really adds something to the sport. - Well, do you play soccer yourself? Have you played that in your life? What do you find beautiful about the game? - Yeah, I mean, it's one of the, I'd say it's the biggest sport in Portugal.

And so growing up, we played a lot. - Did you see the paper from DeepMind? I didn't look at it, where they're like doing some analysis on soccer strategy. - Yeah, interesting. - I saved that paper. I haven't read it yet. It's actually, when I was in college, I actually did some research on applying machine learning and statistics in sports.

In our case, we were doing it for basketball. What we were effectively trying to do was, have you ever watched Moneyball? We were trying to do something similar, right? Taking a statistical approach, in this case to basketball. The interesting thing there is that baseball is much more about discrete events that happen in similar conditions.

And so it's easier to take a statistical approach to it. Whereas basketball, it's a much more dynamic game. It's harder to measure. It's hard to replicate these conditions. And so you have to think about it in a slightly different way. And so we were doing work on that and working like with the Celtics to analyze the data that they had.

Like they had these cameras in the arena, they were tracking the players. And so they had a ton of data, but they didn't really know what to do with it. And so we were doing work on that. And soccer is maybe even a step further. It's a game where you don't have as many...

In basketball, you have a lot of field goals, so you can measure success. Soccer is almost more of a Poisson process, where you have a goal or two in a game. In terms of metrics, I wonder if there's a way... I've actually thought about this in the past, never coming up with any good solution.

If there's a way to definitively say whether it's Messi or Ronaldo who's the greatest of all time. Honestly, to convert the game of soccer into metrics, like you said with baseball. But those moments of genius, if it's just about goals or passes that led to goals, that feels like it doesn't capture the genius of the play.

You have more metrics, for instance, in chess, and you can try to understand how hard a move was. Bobby Fischer has this move that I think is called the move of the century, where you have to go so deep into the tree to understand that it was the right move, and you can quantify how hard it was.

So it'd be interesting to try to think of those types of metrics, but say, yeah, for soccer. - Computer vision unlocks some of that for us. That's one possibility. - I have a cool idea, a computer vision product, Lex, that you could build for soccer. - Let's go. I'm taking notes.

- If you could detect the ball, and imagine that, it seems totally doable right now: if you could detect when the ball enters one of the goals and just have a crowd cheering for you when you're playing soccer with your friends, every time you score a goal, or have the Champions League song going on. You go play soccer with your friends, you just turn that on, and there's a computer vision program analyzing the ball.
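Purely as a sketch of that idea, a minimal version might look like the following with OpenCV: isolate the ball by color, check whether its center enters a hand-marked goal rectangle, and play a cheer. The camera setup, HSV thresholds, goal coordinates, and the cheer.mp3 file are all illustrative assumptions, not a finished product.

```python
# Minimal sketch (not the product discussed): detect a colored ball with OpenCV,
# check whether its center enters a hand-marked goal rectangle, and trigger a cheer.
# Assumptions: fixed camera, a ball that stands out by color, "cheer.mp3" is a placeholder.
import cv2
import numpy as np
from playsound import playsound  # assumed available; any audio player would do

GOAL_ROI = (50, 200, 200, 350)           # x1, y1, x2, y2 in pixels, marked by hand
LOWER_HSV = np.array([5, 120, 120])      # rough orange range; tune for your ball
UPPER_HSV = np.array([25, 255, 255])

def ball_center(frame):
    """Return (x, y) of the largest color blob, or None if nothing ball-sized is found."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return (int(x), int(y)) if radius > 5 else None

def in_goal(point):
    x1, y1, x2, y2 = GOAL_ROI
    return point is not None and x1 <= point[0] <= x2 and y1 <= point[1] <= y2

cap = cv2.VideoCapture(0)                # webcam pointed at the goal; stop with Ctrl+C
was_in_goal = False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    center = ball_center(frame)
    if in_goal(center) and not was_in_goal:
        playsound("cheer.mp3")           # crowd noise / Champions League anthem
    was_in_goal = in_goal(center)
cap.release()
```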

- Detects the ball. - Detects the ball, and every time there's a goal, or if you miss, the fans react to that. - And then... - It should be pretty simple by now. I think there's an opportunity there. Just throwing that out there. - I'm going to go all out. By the way, I did...

I've never released... I was thinking of just putting it on GitHub, but I did write exactly that, which is the trackers for the players, for the bodies of the player. This is the hard part, actually. The detection of player bodies and the ball is not hard. What's hard is very robust tracking through time of each of those.

So I wrote a tracker that's pretty damn good. - Is that open source? You open source? - No, I've never released it. - Interesting. - Because I thought I need to... This is the perfection thing, because I knew it was going to be like... It's going to pull me in and it wasn't really that done.

And so I've never actually been part of a GitHub project where it's like really active development. - Interesting. - And I didn't want to make it... I knew there's a non-zero probability that it will become my life for like a half a year. That's just how much I love soccer and all those kinds of things.

And ultimately, it will be all for just the joy of analyzing the game, which I'm all for. - I remember you also, in one of the episodes, you mentioned that you did also a lot of eye tracking analysis on Joe Rogan. - That was the research side of my life.

- Interesting. And you have that library, right? You kind of downloaded all the episodes. - Allegedly. And of course I didn't, if you're a lawyer listening to this. - I was listening to the episode where you mentioned that, and there was something where I might ask you for access to that library, allegedly.

Not regarding eye tracking, but I was playing around with analyzing the distribution of silences on one of the Joe Rogan episodes. I did that for the Elon conversation: you just take all the silences after Joe asks a question and before Elon responds, and you plot that distribution and see what it looks like.
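That kind of pause analysis is straightforward to sketch. Assuming a mono WAV export of an episode and the librosa library (both just illustrative choices, as is the energy threshold), one version could look like this:

```python
# Minimal sketch of a silence-distribution analysis, assuming a mono WAV export of
# the episode ("episode.wav" is a placeholder) and the librosa library.
import numpy as np
import librosa

FRAME, HOP = 2048, 512

y, sr = librosa.load("episode.wav", sr=None, mono=True)
rms = librosa.feature.rms(y=y, frame_length=FRAME, hop_length=HOP)[0]
quiet = rms < 0.1 * np.median(rms)            # crude threshold; tune per recording

# Collapse consecutive quiet frames into silence durations (seconds).
silences, run = [], 0
for q in quiet:
    if q:
        run += 1
    elif run:
        silences.append(run * HOP / sr)
        run = 0
if run:
    silences.append(run * HOP / sr)

silences = np.array([s for s in silences if s > 0.5])   # ignore sub-half-second gaps
if silences.size:
    print(f"{len(silences)} pauses, median {np.median(silences):.2f}s, max {silences.max():.2f}s")
```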

- Yeah, I think there's a huge opportunity, especially with long form podcasts, to do that kind of analysis, bigger than Joe. - Exactly. It has to be a fairly unedited podcast so that you don't cut the silences. - So one of the benefits I have doing this podcast is, what we're recording today is there's individual audio that's being recorded.

So I have the raw information, when it's published, it's all combined together, and individual video feeds. So even when you're listening, which I usually do, I only show one video stream, I can track your blinks and so on. But ultimately, the hope is you don't need that raw data, because if you don't need the raw data for whatever analysis you're doing, you can then do a huge number of podcasts.

The number is quickly growing now, especially comedians; there are quite a few comedians with long form podcasts, and they have a lot of facial expressions, they have a lot of fun, and it's ripe for analysis. - There are so many interesting things. That idea actually sparked because I was watching a Q&A by Steve Jobs, I think it was at MIT.

He did a talk there, and then the Q&A started, and people started asking questions. I was working while listening to it, and someone asked a question, and he goes into a 20-second silence before answering. I had to check whether the video had paused or something.

And I was thinking about if that is a feature of a person, how long on average you take to respond to a question, and if it's like- - Oh, that's fascinating. - Has to do with how thoughtful you are, and if that changes over time. - But it also could be, this is a really fascinating metric, 'cause it also could be, it's certainly a feature of a person, but it's also a function of the question.

- True. - Like, if you normalize to the person, you can probably infer a bunch of stuff about the question. So it's a really strong signal, the length of that silence relative to the usual silences they have. So, one, the silence is a measure of how thoughtful they are, and two, the particular silence is a measure of how- - Thoughtful the question was.

- Thoughtful the question was. It's really interesting. I mean, yeah. - Yeah. I just analyzed Elon's episode, but I think there's room for exploration there. - I feel like the average for comedians would be, I mean, the time would be so small, 'cause you're trained to, I would think you're reacting to hecklers, you're reacting to all sorts of things, you have to be so quick.

- Maybe, maybe. Yeah. But some of the greatest comedians are very good at sitting in the silence. Louis CK plays with that, 'cause you have a rhythm. Dave Chappelle, a comedian who did Joe's show recently, especially when he's just having a conversation, does long pauses.

It's kind of cool. It's one of the ways to have people hang on your every word: to play with the pauses, the silences, the emphasis, mid-sentence. There are a bunch of different things that it'd be interesting to really analyze, but still, soccer to me, that one's fascinating.

I just want a conclusive, definitive statement, 'cause there are so many soccer highlights of both Messi and Ronaldo. I just feel like the raw data is there. - Ingest it, and decide. - A definitive statement. 'Cause you don't have that with Pelé and Maradona. - Yeah, true. - But here there's a huge amount of high-def data. The annoying, difficult thing, and this is really hard for tracking.

And this is actually where I kind of gave up. I didn't put in much effort, but I gave up because of the way that highlights, or football matches generally, are filmed: they switch the camera, they switch perspective. So it's a really interesting computer vision problem.

When the perspective is switched, you still have a lot of overlap between the players, but the perspective is sufficiently different that you have to recompute everything. So there are two ways to solve this. One is doing it the full way, where you're constantly solving the SLAM problem: you're doing a 3D reconstruction the whole time and projecting into that 3D world.

But there could also be some hacks. I wonder about some trick where, when the perspective shifts, you do high-probability tracking hops from one object to another. But especially in exciting moments, when you're tracking a single ball dribbled across players and they switch perspective, which they often do when someone is making a run on goal.

If you switch the perspective, it feels like that's going to be really tricky to get right automatically. But in that case, for instance, I feel like if somebody released that data set, a massive data set of all these games from, say, Ronaldo and Messi, in whatever format, CSV, some publicly available data set like that.

I feel like people would just, there would be so many cool things that you could do with it. And you just set it free and then like the world would like do its thing. And then like interesting things would come out of it. - By the way, I have this data set.

So the two things I've done at this scale are soccer, which is body pose and ball tracking, and then the pupil tracking and blink tracking for Joe Rogan and a few other podcasts. So those are the two data sets I have.

- Did you analyze any of your podcasts? - No, I think I really started doing this podcast after doing that work and it's difficult to, maybe I'd be afraid of what I find. I'm already annoyed with my own voice and video, like editing it. But perhaps that's the honest thing to do.

'Cause one useful thing about doing computer vision on myself is that I know what I was thinking at the time. So you can start to connect the behavioral peculiarities, the way you blink, the way you squint, the way you close your eyes, talking about details.

It's like, for example, I just closed my eyes. Is that a blink or no? Like figuring that out in terms of timing, in terms of the blink dynamics is tricky. It's very doable. I think there's universal laws about what is a blink and what is a closed eye and all those things, plus makeup and eyelashes.
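One simple way to separate a blink from a deliberate eye closure, sketched here under the assumption that some landmark detector already provides a per-frame eye aspect ratio, is to group closed frames into runs and label each run by its duration; the thresholds below are illustrative, not universal laws.

```python
# Minimal sketch of separating a blink from a deliberate eye closure by duration.
# Assumes something upstream (a landmark detector) already gives a per-frame
# eye aspect ratio (EAR); the thresholds and the 0.3-second cutoff are illustrative.
from dataclasses import dataclass

EAR_CLOSED = 0.2        # below this the eye counts as closed (tune per person)
MAX_BLINK_SEC = 0.3     # closures longer than this are treated as "eyes closed"

@dataclass
class Event:
    start_s: float
    duration_s: float
    kind: str           # "blink" or "closure"

def classify_closures(ear_per_frame, fps):
    """Group consecutive closed frames into runs and label each run by duration."""
    events, run_start = [], None
    for i, ear in enumerate(ear_per_frame):
        if ear < EAR_CLOSED and run_start is None:
            run_start = i
        elif ear >= EAR_CLOSED and run_start is not None:
            dur = (i - run_start) / fps
            kind = "blink" if dur <= MAX_BLINK_SEC else "closure"
            events.append(Event(run_start / fps, dur, kind))
            run_start = None
    return events

# Toy usage with a fake EAR trace at 30 fps: open, quick blink, long closure, open.
trace = [0.32] * 30 + [0.15] * 6 + [0.31] * 30 + [0.12] * 45 + [0.30] * 30
for e in classify_closures(trace, fps=30):
    print(e)
```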

I actually have annoyingly long eyelashes. So I remember when I was doing a lot of this work, I would cut off my eyelashes, which was funny, especially because female colleagues were like, what the fuck are you doing? No, keep the eyelashes. But they got in the way and made the computer vision a lot more difficult.

- Super interesting topics. - Yeah. But still on the topic of data sets for sports, there's one paper, and I actually annotated it on Fermat's, that was published in the 90s, I believe, 90s or 80s, I forget. The researcher was effectively looking at the hot hand phenomenon in basketball, right?

So whether the fact that you just made a field goal means you're more likely to make your next attempt or not. And it was super interesting because he polled, I think, a hundred undergrads, I think from Stanford and Cornell, asking people: do you think you have a higher likelihood of making your free throw if you just made one?

And I think it was like 68% who said yes, they believed that. And then he looked at the data, and this was back, as I said, a few decades ago. He looked at it specifically for free throws, and he had a data set of about 5,000 free throws.

And effectively what he found was that, specifically in the case of free throws, for the aggregate data he couldn't really spot that correlation, that hot hand correlation. So if you made the first one, you weren't more likely to make the second one.

What he did find was that players were just better at the second one, because you get a tiny bit of practice; you've just attempted once, and then you're going to be better at the next one. And then I went and found a data set on Kaggle that has like 600,000 free throws.

And I re-ran the same computations and confirmed it: you can see a very clear pattern that they're just better at their second free throw. - That's interesting, 'cause that kind of analysis is so awesome. I think with tennis they have analysis of, when you serve, are you more likely to miss the second serve if you missed the first.
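A sketch of what re-running that conditional free-throw check on a Kaggle-style table might look like; the file name and column names are invented for illustration, and collapsing each player-game into a single two-shot trip is a simplification.

```python
# Sketch of the hot-hand check described above on a Kaggle-style free-throw table.
# The CSV path and column names (player, game_id, shot_in_trip, made) are invented
# for illustration; any log of free-throw pairs with a made/missed flag would do.
import pandas as pd

ft = pd.read_csv("free_throws.csv")

# Pivot so each row is (first attempt, second attempt) of a two-shot trip.
pairs = (
    ft.sort_values(["player", "game_id", "shot_in_trip"])
      .groupby(["player", "game_id"])      # simplification: one trip per player-game
      .agg(first=("made", "first"), second=("made", "last"))
)

p_first = pairs["first"].mean()
p_second = pairs["second"].mean()
p_second_after_make = pairs.loc[pairs["first"] == 1, "second"].mean()
p_second_after_miss = pairs.loc[pairs["first"] == 0, "second"].mean()

print(f"P(make 1st)              = {p_first:.3f}")
print(f"P(make 2nd)              = {p_second:.3f}")   # 'better at the second one'
print(f"P(make 2nd | made 1st)   = {p_second_after_make:.3f}")
print(f"P(make 2nd | missed 1st) = {p_second_after_miss:.3f}")  # a hot hand would show a gap
```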

I think that's the case. It's so cool when psychology is converted into metrics in that way. And in sports it's especially cool, because it's such a constrained system that you can really study human psychology: it's repeated, it's constrained, so many things are controlled, which is something you rarely have in in-the-wild psychological experiments.

So it's cool. Plus everyone loves it; sports are really fun to analyze. - People actually care about the results. - Yeah. I still think, well, I know we'll definitely publish this work on Messi versus Ronaldo, fully objective. - I'd love to peer review. Yeah, this is very true.

This has not passed peer review. Let me ask sort of an advice question for the young folks. You've explored a lot of fascinating ideas in your life. You built a startup, worked on physics, worked in computer science. What advice would you give to young people today, in high school, maybe early college, about life, about career, about science and mathematics?

- I remember reading that Poincaré was once asked by a French journal about his advice for young people and what his teaching philosophy was. And he said that one of the most important things parents should teach their kids is how to be enthusiastic about the mysteries of the world.

And he said striking that balance was actually one of the most important things in education: you want your kids to be enthusiastic about the mysteries of the world, but you also don't want to traumatize them by really forcing them into something.

And I think, especially if you're young, you should be curious, and you should explore that curiosity to the fullest, to the point where you almost become an expert on that topic. And you might start with something small, like being interested in numbers and how to factor numbers into primes.

And then all of a sudden you go and you're like lost in number theory and you discover cryptography. And then all of a sudden you're buying Bitcoin. And I think you should do this. You should really try to fulfill this curiosity and you should live in a society that allows you to fulfill this curiosity, which is also important.

And I think you should do this not to get to some sort of status or fame or money, but because this iterative process is the way to find happiness. And I think it also allows you to find meaning for your life.

I think it's all about being curious and being able to fulfill that curiosity, and the path to fulfilling it. - Yeah, start small and let the fire build; that's an interesting way to think about it. - And you never know where you're going to end up.

For us, Fermat's is just a really good example. We started by doing this as an internal thing in the company, and then we started putting it out there, and now a lot of people follow it and know about it. And so... - And you still don't know where Fermat's Library is going to end up, actually.

- True, exactly. So yeah, I think that would be my piece of advice, with very limited experience, of course. - Yeah, I agree. - I agree. Is there something in particular, João, from the computer science versus physics perspective? Do you regret not doing physics? Do you regret not doing computer science?

Which one makes the wiser, the better human being? This is Messi versus Ronaldo. I don't know if you would agree, but they're kind of different disciplines. - True. - Yeah. - Very much so. I actually had that question in my mind. I took physics classes as an undergrad beyond what I had to take.

And it's definitely something that I considered at some point. And I do feel like later in life, that might be something that I'm not sure if regret is the right word, but it's kind of something that I can imagine in an alternative universe, what would have happened if I had gone into physics.

I try to think that, well, it depends on what your path ends up being, but it's not super important exactly what you decide to major in. I think Tim Urban, the blogger, had a good visualization of this: he has a picture where you have all sorts of paths that you could pursue in your life.

And then maybe you're in the middle of it. So there are maybe some paths that are not accessible to you, but the tree that is still in front of you gives you a lot of optionality. And so- - There are two lessons to learn from that. We have a huge number of options now, and trying to derive wisdom from the one little path you've taken so far may be flawed, because there are all these other paths you could have taken.

So it's like, so one, it's inspiring that you can take any path now. And two, it's like the path you've taken so far is just one of many possible ones. But it does seem that physics and computer science both open a lot of doors and a lot of different doors.

- True. - It's very interesting. - It is. I feel like in our case I could see the difference: I went to college in Europe and João went to college here in the US. The European system is more rigid, in the sense that when you decide to study physics, especially in the early years, you can't choose to take a class from, say, a computer science course or something like that.

You don't have a lot of freedom to explore in that sense in university, as opposed to here in the US where you have more freedom. And I think that's important. I think that's what constitutes a good kind of educational system is one that gravitates towards the interests of a student as you progress.

But I think in order for you to do that, you need to explore different areas. And I felt like if I had had the chance to take, say, more computer science classes when I was in college, I probably would have taken those classes. But I ended up focusing maybe too much on physics.

And I think here, at least my perception is that you can explore more fields. - It's funny, but physics can be difficult, so I don't see too many computer science people exploring into physics. One of the beneficial things of physics, it feels like, what was it, Rutherford who said, basically, that physics is the hard thing and everything else is easy.

So like there's a certain sense once you've figured out some basic like physics, that it's not that you need the tools of physics to understand the other disciplines. It's that you're empowered by having done difficult shit. I mean, the ultimate, I think is probably mathematics there. - Yeah, true.

- So maybe just doing difficult things and proving to yourself that you can do difficult things, whatever those are. - That's net positive, I believe. - Net positive. - Yeah. And I think like, before I started a company, I worked in the financial sector for a bit. And I think having a physics background, I felt I was not afraid of learning finance things.

And I think when you come from those backgrounds, you are generally not afraid of stepping into other fields and learning about them, because you feel you've learned a lot of difficult things, and that's an added benefit, I believe. - This was an incredible conversation, Luís, João. We started with, who did we start with?

Feynman, and ended up with Messi and Ronaldo. So this is like the perfect conversation. It's really an honor that you guys would waste all this time with me today. It was really fun. Thanks for talking. - Thank you so much for having us. Yeah, thank you so much. - Thanks for listening to this conversation with Luís and João Batalha.

And thank you to Skiff, SimpliSafe, Indeed, NetSuite, and 4Sigmatic. Check them out in the description to support this podcast. And now let me leave you with some words from Richard Feynman: "Nobody ever figures out what life is all about, and it doesn't matter. Explore the world. Nearly everything is really interesting if you go into it deeply enough." Thank you for listening.

I hope to see you next time.