
Sarah Pan, teenage AI wizard



00:00:00.080 | Hi, I'm Jeremy Howard from Answer.ai, and I'm joined here by Sarah. Sarah is Answer.ai's
00:00:06.640 | first ever fellow, first founding member of our fellowship program, and today we're going to hear
00:00:14.800 | a bit about Sarah's background and experiences. I think you'll find her a really inspiring
00:00:20.640 | and amazing human being. Thanks for joining us, Sarah.
00:00:24.000 | Thank you for having me.
00:00:26.800 | So let me just rewind a little bit for those who haven't had the pleasure like I have of
00:00:32.240 | getting to know you and who you are. Sarah is a person who has a remarkable achievement
00:00:41.360 | of getting published in NeurIPS, the world's premier academic AI conference, whilst still at high school.
00:00:50.080 | And I met Sarah at NeurIPS. The paper she published is extremely high-quality work and extremely
00:00:59.680 | interesting. And I was thrilled to discover that her background included learning from my courses,
00:01:08.240 | the fast.ai courses, and we stayed in touch. Everything I saw Sarah do was incredibly impressive,
00:01:18.240 | and so I wanted more than anything else to find an opportunity to work with her. So we offered her
00:01:22.800 | this role as an Answer.ai fellow. So Sarah, maybe to start with, could you summarize for a
00:01:35.040 | somewhat layperson audience the research that you were presenting at NeurIPS? What was the purpose of
00:01:42.480 | the work, of the research you were doing, and what did you find?
00:01:45.760 | Sure. So I started this project, I think, towards the beginning of 2023. At that time,
00:01:55.680 | ChatGPT was still relatively, or I think, yeah, relatively young. And we were interested in doing
00:02:02.960 | this thing with multi-step reasoning. So at the time, these large language models really struggled with
00:02:08.560 | producing coherent and correct mathematical reasoning and logic. And so we were interested in finding ways
00:02:16.320 | to improve that. So one of the papers that we kind of based our approach off of was from OpenAI. They
00:02:25.040 | used these reward models as verifiers to kind of verify whether these steps in a multi-step reasoning
00:02:31.920 | process would be correct or incorrect. And what was interesting about their approach was that instead
00:02:37.360 | of having one reward model that would produce a single reward, kind of telling you whether the
00:02:43.280 | solution was correct or incorrect or somewhat of a mix between the two, this would basically grade each
00:02:49.600 | step. So it would tell you where exactly the response went wrong. And yeah, it would give you
00:02:55.920 | kind of a more specific kind of feedback in terms of that. Let's talk about OpenAI for a moment then
00:03:03.360 | before coming back to how you built on this, because this is very much in the news now. So OpenAI
00:03:08.080 | recently released a pair of new models called o1-preview and o1-mini, which are dramatically better at
00:03:16.640 | reasoning than previous models. And they seem to be taking advantage of a training system that uses this
00:03:24.240 | kind of approach. OpenAI have explained how these models have been explicitly taught, or given feedback
00:03:31.280 | on, their reasoning steps, and have learned to become better reasoners as a result. So it's interesting
00:03:36.720 | that, it seems, you picked a problem which I think not coincidentally has turned out to be really
00:03:45.760 | important in practice. Right. And so kind of more on that, the actual origins of our project weren't on
00:03:53.600 | mathematical reasoning. Originally, we had been looking at sort of bias in models in terms of the more
00:04:00.800 | like kind of like ethical implications. And we realized that a lot of times, like statements were
00:04:07.840 | not logically connected with each other. And so this kind of led us down the route of logical reasoning.
00:04:12.960 | But yeah, that was kind of a tangent. But back to the actual... How did you build on top of that OpenAI work, then?
00:04:19.040 | Did you take it in a different direction or you kind of took it a little further in the same direction or what?
00:04:23.520 | Well, I mean, the PRM paper.
00:04:27.840 | Yeah. So basically, what we did was... well, they had basically only trained the verifiers.
00:04:33.840 | They showed that the more process-reasoning-oriented ones were better than the outcome-based,
00:04:42.800 | sort of holistic ones. And so we decided to take it one step further and actually use it in an RLHF pipeline
00:04:49.360 | to update a sort of like completion model. And so...
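
To make that concrete: below is a minimal, hypothetical Python sketch of the distinction Sarah is drawing, between an outcome reward model that scores a whole solution once and a process reward model that scores every step. The step scorer is a stub standing in for a trained verifier; none of this is code from the paper.

    # Toy illustration of outcome vs. process reward models (ORM vs. PRM).
    # A real verifier is a trained language model conditioned on the question
    # and the steps so far; this stub heuristic just makes the sketch runnable.

    def score_step(question: str, steps_so_far: list[str]) -> float:
        """Stand-in for a learned verifier: P(latest step is correct)."""
        return 0.9 if "=" in steps_so_far[-1] else 0.4  # placeholder logic

    def process_rewards(question: str, steps: list[str]) -> list[float]:
        """PRM-style feedback: one score per reasoning step."""
        return [score_step(question, steps[: i + 1]) for i in range(len(steps))]

    def outcome_reward(question: str, steps: list[str]) -> float:
        """ORM-style feedback: a single score for the whole solution."""
        return min(process_rewards(question, steps))  # one possible aggregation

    steps = ["2x + 3 = 11", "2x = 8", "x = 4"]
    print(process_rewards("Solve 2x + 3 = 11 for x", steps))  # [0.9, 0.9, 0.9]
    print(outcome_reward("Solve 2x + 3 = 11 for x", steps))   # 0.9

The point Sarah makes above is that the per-step scores localize where a solution goes wrong, which a single holistic scalar cannot.
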
00:04:53.120 | I know this is second nature to you, so let's just rewind a bit. So RLHF is reinforcement learning
00:04:57.440 | from human feedback. So this is the third step in the process that OpenAI used to build stuff like the ChatGPT model,
00:05:05.920 | where they get human beings to give feedback about different possible answers to a question,
00:05:12.880 | and it helps the model learn better what human beings are looking for.
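
As a rough sketch of the preference step Jeremy just described, here is a minimal pairwise reward-model update, assuming PyTorch. The tiny embedding encoder is a placeholder for a real language model, and the loss is the standard Bradley-Terry-style pairwise objective, not OpenAI's actual pipeline.

    # Minimal sketch: train a reward model from human preference pairs.
    # The model should score the human-preferred response above the rejected one.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyRewardModel(nn.Module):
        def __init__(self, vocab_size: int = 1000, dim: int = 32):
            super().__init__()
            self.encoder = nn.EmbeddingBag(vocab_size, dim)  # stand-in for an LLM
            self.head = nn.Linear(dim, 1)                    # maps to a scalar reward

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            return self.head(self.encoder(token_ids)).squeeze(-1)

    model = TinyRewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Fake batch of preference pairs: token ids for preferred/rejected answers.
    chosen = torch.randint(0, 1000, (4, 16))
    rejected = torch.randint(0, 1000, (4, 16))

    opt.zero_grad()
    # Pairwise logistic (Bradley-Terry) loss: -log sigmoid(r_chosen - r_rejected)
    loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
    loss.backward()
    opt.step()
    print(f"preference loss: {loss.item():.3f}")

The trained scalar reward is then what the reinforcement learning step optimizes the completion model against; Sarah's variation, as she explains next, swaps the pairwise preference data for OpenAI's per-step correctness labels.
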
00:05:17.760 | Right. So typically a human is given two model responses,
00:05:24.480 | the human rates them or picks the better one out of the two. And then based on that preference data,
00:05:31.360 | a reward model is trained. But for us, we were... There was a dataset released by OpenAI that had
00:05:36.880 | that kind of process feedback, where individual steps were rated correct or incorrect by human graders,
00:05:43.280 | and we trained a reward model based on that. And so in terms of like the actual RLHF pipeline,
00:05:50.640 | there were a few key changes that we had to make just because the setup of our, I guess, process was
00:05:56.720 | a little different. I can talk about those a little bit more, but I don't know if you have any more, like,
00:06:01.440 | conceptual questions or any clarification. Oh, that's okay. So I just wanted to kind of... I
00:06:06.400 | think it helps to start, you know, with like where... what you've been working on recently. So basically
00:06:11.680 | it's taking a really kind of classic branch of research that turned out to be of great practical
00:06:17.200 | import, which is moving from just like, is this a good answer or not? To, is this a good series of
00:06:22.400 | steps to get to an answer or not? And then building on that to then say, okay, with that, we can then
00:06:27.440 | hopefully come up with a model which can actually create the correct steps and as a result doesn't
00:06:34.080 | have to jump straight to the answer straight away. And yeah, as a TL;DR, your model
00:06:43.920 | had encouraging results. I mean, you weren't able to train it on the large amount of compute you would
00:06:48.800 | have liked, since high school researchers don't have access to OpenAI's computers. But it looked good
00:06:55.760 | enough at least to get a NeurIPS paper. So actually, I wanted to kind of then step back a bit to say,
00:07:02.960 | like, right, I've... through my work with fast.ai, I would say, like, every year there's maybe a couple of
00:07:14.960 | high school students who I come across who do fast.ai and, you know, become extremely competent
00:07:26.560 | practitioners. And so I've kind of got to know a few folks like you. And generally, the experience is like,
00:07:35.920 | a bit tricky because there aren't other people at school who have the same interests and capabilities,
00:07:43.760 | either teachers or students. So, you know, how did this work for you? How did you
00:07:53.360 | get into artificial intelligence and how did you then follow that interest, even though I assume
00:08:01.120 | you did not have a group of peers who were following that interest with you?
00:08:06.160 | Yeah, you're definitely right about the peers thing. I think I started towards the end of middle
00:08:13.760 | school, beginning of high school. What kind of age is that? I don't know what middle school is in America.
00:08:18.560 | Oh, okay. Yeah, I think I was, like, 13, 14 years old. And so I had taken, like, an algebra one course
00:08:26.000 | as, like, my highest math level. But my brother was taking the course because he's much older than me.
00:08:32.880 | He's, you know, interested in this sort of thing. And so I was like, hey, let me, you know, watch and let me
00:08:37.840 | follow along. And I think, like, the one thing that kind of carried me through the course, if you will,
00:08:44.320 | was actually, like, the kind of ease of, like, I don't know, just watching the lectures, playing
00:08:49.520 | around with the notebooks. It was something... For those who haven't done it, I'll just explain
00:08:53.200 | there that, like, fast AI course is a bit unusual in that even though we do get to the point of, like,
00:08:58.960 | re-implementing recent research papers, we do it in this top-down way where you start out, like,
00:09:07.600 | building stuff, basically. So it's, it's... And then you kind of only gradually get into the
00:09:13.360 | complexity of the underlying implementation as needed. So I guess you're saying that was, like,
00:09:18.000 | helpful for you as a teenager in working through it? Yes, very. I think I liked starting out with,
00:09:24.960 | like, the higher level ideas just because I got to see how they kind of worked. I feel like if you
00:09:30.400 | probably started with, I don't know, like, backpropagation or, like, any of those, like, you know, fancy
00:09:36.000 | calculus things... That's right, calculus. Hmm. But, yeah, it was just nice being able to start out with,
00:09:43.360 | like, a bigger picture view of things. And that was definitely interesting. I had no idea. Did you
00:09:48.160 | and your brother help each other? Were you kind of co-studying? We kind of co-studied in, like,
00:09:54.400 | the very beginning. I think we watched, like, a lecture or two together. But as time progressed,
00:09:58.880 | like, I realized that I didn't really, like, need him there next to me. Things were more or less
00:10:03.120 | understandable. I think there was, like, a forum, too. If I had any, like, questions, I could, like,
00:10:07.440 | go and, like, see if there were any other issues. Have you used that? Hmm?
00:10:10.880 | You used the forum? I was mostly, like, a watcher. Yeah. Like, from afar, kind of. If anyone had
00:10:18.400 | similar questions, that would be super helpful. Wasn't too much of a poster myself. But it's very,
00:10:25.200 | it was very cool being able to see, like, a bunch of people kind of following along and doing their
00:10:29.440 | own thing. Like, even if, like, my close friends weren't doing it, the only person I knew who was doing it
00:10:34.640 | was my brother, who's, like, much older than me. It was nice, like, to think about having, like, that
00:10:39.680 | sort of, like, I don't know, there's just, like, a community of people online that are just really
00:10:44.000 | curious and driven. Exactly. I remember a teenage girl from Bangladesh emailing me to say, like,
00:10:52.800 | hey, I've just finished the fast AI course. This seems really important and really good. But, like,
00:10:58.560 | all my friends think it's weird. And, like, I don't know anybody else that, like, really uses computers
00:11:02.880 | much. So, like, is it okay? Am I doing something wrong? And I was like, no, it'd be better than okay.
00:11:08.960 | It's amazing, you know. And she went on to get a fellowship at Google and got flown out to Silicon
00:11:13.920 | Valley. But, you know, it can be, yeah, it can be a bit weird. And actually, I remember you telling me
00:11:22.320 | that going to NeurIPS was pretty important for you because suddenly you were surrounded by real-life
00:11:28.640 | versions of these people and realized, like, oh, they're not just, like, distant figures, right? That must have been amazing.
00:11:32.640 | Yeah. It was really cool. It was, like, one thing to, like, I don't know, kind of just, like, see the
00:11:38.560 | presentations and talks and whatever, but, like, actually walking through, like, the poster
00:11:43.280 | halls and getting to, like, ask people about their research. Like, wow, these people, like, are also
00:11:47.840 | interested in the same things, and they, like, spend so much time, like, asking the same
00:11:52.800 | questions that I'm interested in and things like that. This is a conference with, like, 10,000 AI
00:11:57.680 | researchers or something all coming into New Orleans, and everywhere you go around the conference
00:12:02.320 | building, all the pubs and restaurants are full of AI researchers. Yeah, it's pretty insane.
00:12:09.920 | I felt a bit the same way when I first went to San Francisco, you know, coming from Australia, where
00:12:15.680 | my interests were... I didn't really know anybody else who had them. And it was nice to suddenly find
00:12:21.840 | myself in a town with lots of other people who thought that what I was doing was interesting and
00:12:26.160 | they cared. Like, it matters, you know. But you must have had a lot of tenacity because you were going
00:12:33.600 | for, what, four or five years before you got to that point. I mean, that must have been a huge amount
00:12:38.800 | of work for you. Yeah, I think, like, just, like, it was kind of my, like, side hustle. Like, after
00:12:47.200 | school I'd have, you know, extra free time. This would be sort of my thing. I think part of it was also,
00:12:52.720 | like, AI was kind of getting, you know, hot in the news. It was responsible for a ton of things,
00:12:58.560 | really cool papers, really cool breakthrough algorithms. And it felt like I was, like, in on
00:13:04.080 | something, you know, because more or less, like, I could understand, like, what was going on. I think,
00:13:11.280 | like, there was, like, this headline, I think, about, like, AlphaFold. And I was like, wait, like, I know,
00:13:16.080 | kind of, more or less, what's going on there. And so that kind of helped.
00:13:19.920 | But that was the protein folding model from Google, right?
00:13:22.720 | Yes. And it had, like... it had 'solved the protein folding problem,' as, like,
00:13:30.960 | the headlines kind of reported. And so that kind of, like, helped me kind of stay interested,
00:13:38.320 | stay driven. And I guess it kind of just, like, blossomed once I, like, hit high school. And I was
00:13:44.880 | in this program called MIT Primes that typically pairs students with, like, graduate students or
00:13:51.440 | professors, high school students with graduate students and professors to do this sort of, like,
00:13:55.680 | research project. And so for me, I was paired with Vlad Lialin, who was a PhD student at UMass Lowell,
00:14:04.000 | and now he's graduated with his PhD.
00:14:05.680 | And he became your co-author on your NeurIPS paper.
00:14:08.480 | Yes, who is my mentor and co-author and basically taught me everything I need to know about AI research.
00:14:14.480 | But through that program, I was able to kind of take on, like, a new perspective. Because this entire
00:14:22.080 | time, I'd been kind of, like, a student. I'd kind of, like, learned about these things, kind of played
00:14:26.720 | with them, kind of analyzed them from, like, the top down, picked apart their inner workings more or less.
00:14:32.240 | But now I was kind of presented with the question of, like, so what, like, what's next, right?
00:14:36.640 | And so...
00:14:37.280 | Can I just rewind for a bit, like, about the side hustle? Because, like, as a homeschooling dad,
00:14:43.840 | this kind of interests me, because I think it might be somewhat more aligned with how I think about,
00:14:49.120 | you know, school education. You know, rather than spending more time doing your math homework,
00:14:57.360 | you are spending time doing something that's not in the curriculum. But if I'm thinking about it,
00:15:03.840 | I'm thinking, okay, you're doing fast AI. Lesson four, you have to use the chain rule in calculus,
00:15:12.240 | which you would not have done in algebra one. So you must have been, like, I don't know,
00:15:19.680 | what were you doing, like, going to Khan Academy and stuff like that to kind of learn this stuff? And,
00:15:23.440 | like, I imagine that then by the time you covered it in high school, it would have been reasonably
00:15:28.800 | straightforward for you because you've been applying it for years at that point.
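
For readers who haven't met it, the chain rule Jeremy mentions is the one calculus fact backpropagation leans on: when a quantity depends on x only through an intermediate function, the derivatives multiply. A quick worked example:

    % Chain rule: if y = f(g(x)), then dy/dx = f'(g(x)) * g'(x).
    \[
      y = (3x + 1)^2
      \quad\Rightarrow\quad
      \frac{dy}{dx} = 2(3x + 1) \cdot 3 = 6(3x + 1)
    \]
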
00:15:31.600 | Right. I realized, like, I'd taken multivariable at some point in high school, and I was like,
00:15:36.640 | wait, like, these concepts are very, very familiar to me. But I think, like, you're right in the sense
00:15:42.640 | that, like, for me, a lot of things were nonlinear. I think, like, in American schooling,
00:15:48.480 | especially, like, there's a very, you know, straightforward progression from algebra one,
00:15:53.360 | to, you know, geometry, to whatever, to eventually calculus. And I think, like, because I was just so
00:16:00.160 | interested in, like, AI, machine learning and things like that, I kind of just, like, took the initiative
00:16:05.920 | and learned things that I needed to know. And so when it actually came time for math class, like, a lot of
00:16:12.000 | things felt out of order just because they weren't... Yeah, and you would have been better at that math
00:16:17.200 | because you knew what was important, you knew what it was for. Like, as you know, my eight-year-old has
00:16:24.400 | recently started learning derivatives, and it's definitely not in her curriculum, but, like,
00:16:30.480 | we don't follow the curriculum because I figure, like, yeah, if you do fun and interesting stuff,
00:16:35.440 | then it all comes around eventually. And, you know, yeah, I'm just trying to think, like, what about,
00:16:41.200 | like, I mean, coding even more so, right? Like, in the US curriculum, I don't think there's much coding
00:16:49.280 | at high school generally, but in the fast AI course, you would have had to have become reasonably proficient
00:16:54.800 | in Python to succeed there. So I think I was lucky enough to have, like, a few Python classes at my
00:17:02.800 | middle school, but then again, very different ways of, like, thinking about programming. I think a lot of,
00:17:08.880 | like, introductory programming classes, I don't know, they're very, like, game-centered. I feel like a lot
00:17:15.840 | of the, like, intro classes, you just, like, solve a puzzle and that's, like, the entire course. But I think,
00:17:22.880 | like, what's helpful about, I don't know, fast AI and maybe just, like, Python in general is that
00:17:28.640 | it's pretty readable. I think a lot of the notebooks for fast AI, they were on Colab, so I didn't have
00:17:34.800 | to worry about, like, a terminal or, like, VS code and things that were that, you know, more complex.
00:17:40.480 | Yeah, you just click a link and start typing.
00:17:42.080 | Go button.
00:17:42.880 | Yeah.
00:17:43.360 | Exactly. So that was nice. And then also, there were, like, a ton of resources online at that point. Even now,
00:17:50.000 | like, you know, there's, like, ChatGPT, there's AI magic, like, you can literally just, you know,
00:17:55.520 | the barrier to access is definitely much lower nowadays. But it's just something that I had to
00:18:02.560 | learn on the fly. And thankfully, there were enough resources to do so.
00:18:06.800 | Well, sorry, people watching this, but AI magic's an internal tool at AnswerAI, so you don't get to use it.
00:18:12.800 | Maybe I shouldn't have said that.
00:18:14.800 | That's fine. No, the fact that it exists is a known thing. I talked about it on the
00:18:19.040 | podcast ages ago, so.
00:18:21.680 | Glad I haven't spilled the beans.
00:18:23.120 | Now you've just increased the mystery.
00:18:24.800 | Okay, so let's fast forward a little bit. So you actually started working part-time at AnswerAI
00:18:37.840 | before you even finished high school. And then, you know, we had a bit of a conversation about what next
00:18:46.240 | for you. And you felt like MIT was the right place for you to be. So you've been there now for
00:18:52.640 | a few months, I guess. And I think you're living there at MIT, right? I'd love to hear, like, what's
00:19:02.640 | your experience been so far? Because, like, again, like, I'm imagining first year at MIT,
00:19:07.440 | there's still not going to be loads of people, either students or teachers you're dealing with, who
00:19:12.720 | know enough about AI to be published at NeurIPS. Like, have you found, like, yeah, tell me about the
00:19:19.120 | experience in general, and also whether you've, you know, kind of how you're, you know, whether you're
00:19:24.000 | mainly continuing to kind of work with Vlad, and of course, we'll talk shortly about your Answer.ai work,
00:19:29.520 | and what you're learning and experiencing and stuff at MIT.
00:19:34.640 | Yeah, so I think when I first started talking to you about AnswerAI, kind of working there,
00:19:40.880 | doing a fellowship, I really seriously considered taking a gap year, so that I could, you know, pursue
00:19:48.240 | my research projects a little more seriously, have a little more time on my hands. But ultimately, I decided
00:19:55.040 | against it just because I feel like MIT is such, like, I hate to say this, it sounds like so basic,
00:20:02.480 | but it's such, like, a great place to be. I think you're definitely right. Like, again, none of my
00:20:09.040 | peers, or like, not none, but like, the majority of my peers aren't really interested in AI. But they're
00:20:16.160 | amazing people. They, I don't know, there's just a very broad variety in like, what they're interested
00:20:21.360 | in. And they're all super, super passionate about it. And I think like, if I think about my future,
00:20:27.280 | I don't know, five, 10 years down the line, I'm not exactly sure where I'll be. Maybe it'll be doing
00:20:34.160 | AI research, maybe it'll be entirely something else. And I think having that exposure to people that are
00:20:40.160 | interested in other things, people that are the top of their field and whatever that might be is,
00:20:44.240 | it's very exciting. And so, yeah, I guess, like, I guess that's it. Yeah.
00:20:51.040 | And so at the same time, so you're kind of, you know, got this multi-pronged thing going on where
00:20:56.480 | you're working at Answer.ai, you're also doing an MIT, I guess, where your new side hustle used to be
00:21:02.320 | fast AI. You've been working with Austin Huang, who is one of the absolute top AI practitioners
00:21:10.560 | in the world. He was a project leader at Google, you know, building the retrieval stuff for Google's
00:21:18.080 | deep learning models that became Gemini. He's the creator of Gemma.cpp.
00:21:25.200 | Yeah. What's it been like, you know, getting involved with working with folks like Austin,
00:21:33.600 | you know, what's been surprising about it or, you know, what kind of, what's the experience been like
00:21:40.000 | there? Yeah. So I think at first I was definitely a little intimidated. The laundry list of cool things
00:21:47.680 | Austin has done is just, like, insane. But I realized, like, over time, like, Austin and, like,
00:21:55.360 | the rest of the crew at Answer.ai, they're very down to earth, very happy to, like, explain concepts,
00:22:00.000 | very happy to answer questions. And I think that's, like, one of the things I've enjoyed the most about
00:22:05.360 | working with Austin. So for some context, we put together WebGPU puzzles. And so through that,
00:22:14.480 | I had to kind of, like, let's just take a look at that then, shall we? Yeah, sure.
00:22:18.960 | So here's gpucpp.answer.ai. Okay. Okay. So you and Austin built this together. Yes. And
00:22:29.120 | let's talk a bit about what this is. So this is like, let's go through a few here. These are some pretty
00:22:38.880 | hardcore things. Basically, what you're doing here, things like a 1D convolution, a prefix sum,
00:22:45.120 | you're asking people to write code, fill in something to, you know, what is a fairly complex
00:22:58.720 | thing written in hardcore, low-level GPU code? Sorry. Kind of, yeah, hardcore, low-level GPU code,
00:23:11.840 | which, if it's taught at university at all, it would be, like, probably in, like, a master's
00:23:18.000 | program or something like that. It's just, like, it's extremely, extremely, extremely advanced.
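
To give a flavor of what the puzzles ask for, here is a hypothetical Python simulation of the per-thread logic behind the 1D convolution task. The actual puzzles express the same idea in WebGPU shader code via the gpu.cpp framework, so treat this purely as a sketch of the "one thread per output element" pattern.

    # GPU-style thinking: each "thread" computes one output element of a
    # 1D convolution. A GPU runs all of these bodies at once; here we just
    # loop over thread ids to simulate the parallel launch.

    def conv1d_thread(tid: int, a: list[float], kernel: list[float],
                      out: list[float]) -> None:
        """What the thread with id `tid` does for output index `tid`."""
        acc = 0.0
        for j in range(len(kernel)):
            if tid + j < len(a):          # guard against reading past the end
                acc += a[tid + j] * kernel[j]
        out[tid] = acc

    a = [1.0, 2.0, 3.0, 4.0]
    kernel = [1.0, 0.5]
    out = [0.0] * len(a)
    for tid in range(len(a)):             # the GPU would launch these in parallel
        conv1d_thread(tid, a, kernel, out)
    print(out)                            # [2.0, 3.5, 5.0, 4.0]

The guard on out-of-range reads is exactly the kind of detail the puzzles train you to think about, since every thread runs the same code regardless of its position.
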
00:23:24.880 | And you're also doing it in, like, a brand new framework that Austin invented. So
00:23:33.440 | it kind of reminds me a bit of Ada Lovelace in some ways, like, you know, she was the first computer
00:23:41.360 | programmer. And she was programming a computer that had just been invented.
00:23:46.640 | I mean, how do you... Yeah, I mean, is that your background? You've got years of hardcore,
00:23:56.080 | CUDA, GPU, low-level programming background. Like, how did you implement and contribute to this
00:24:03.360 | project? No, I think my entire GPU programming background was probably, like, the three hours I
00:24:11.920 | spent solving Sasha Rush's GPU programming puzzles. And that was really fun for me. But in terms of just,
00:24:19.120 | like, getting this, putting this together, I think, like, learning on the fly again was, like,
00:24:24.800 | a huge thing for me. And also just, like, knowing that the framework wasn't complete. And so
00:24:30.080 | if I had, like, any questions, Austin would be more than happy to answer them.
00:24:34.000 | It's nice to have the guy that wrote the framework there to ask questions about... Exactly.
00:24:37.280 | Yeah. But another thing that I kind of just, like, reminded myself of was that... So for a little
00:24:44.720 | context, Sasha Rush is, I believe, a professor at Cornell. He wrote these GPU puzzles. You can run them
00:24:50.960 | in Colab. But the idea is basically to kind of distill down the ideas behind, you know, this sort of
00:24:58.080 | paradigm of, like, parallel GPU computing, and then have it presented in a very fun, interactive,
00:25:03.680 | sort of, like, puzzle-solving way. And so through that, I kind of learned, like, hey, like, this is
00:25:11.040 | how you actually think in parallel, essentially. And so when I was implementing these, the web GPU version,
00:25:19.040 | I kind of reminded myself that, like, hey, even though this is, like, a new framework, things are a
00:25:24.160 | little hacky here and there, like, the essential idea is the same. And for me, the goal for these puzzles
00:25:32.560 | was the same, kind of, along the lines of the experience I had with Sasha Rush's, kind of just
00:25:38.000 | distilling down those really core ideas into something that beginners could digest, anyone
00:25:44.720 | could digest, and really have, like, a fun time with. And so that's kind of, like, my, like, overarching
00:25:50.480 | philosophy when it came to these puzzles. Yeah, I mean, I just think it's very... It is inspiring,
00:25:56.240 | though, right, because I hear so many people say, you can't expect to make any progress in a career in
00:26:04.320 | AI research or practice without a PhD. And, you know, you are so slow, Sarah, you still don't have a PhD,
00:26:13.920 | you know, my goodness. And yet, you know, like, many, many, many fast AI alumni, I'm not saying
00:26:21.600 | fast AI is the unique way to do this, but it's a very common way to do this, you've forged a great
00:26:26.640 | path. And, you know, to be honest, I did encourage you to consider joining Answer AI full-time at least for a
00:26:33.360 | year, like, your strength, you know, your portfolio is strong enough that we're just about the hardest
00:26:42.160 | company in the world to get into, and we're offering you a position. So it, it definitely works out,
00:26:48.320 | you know, I do want to ask, though, like, a lot of people judge on things other than your pure
00:26:58.000 | demonstrated competence. A lot of people will at least implicitly judge based on, you know, the fact
00:27:05.360 | that somebody is very young, or the fact that somebody is female. And so I'm thinking, for example,
00:27:13.360 | my friend Tanishq, who finished high school when he was 10, and he wanted to go to university. And he, him and
00:27:23.280 | his family faced a lot of prejudice, you know, and struggled to get somebody to understand that
00:27:29.600 | he was ready to go to university. And when he finally found a, you know, professor ready to take
00:27:38.400 | him on, they were right, he shone at university, and he went on to finish a really impressive PhD, you know,
00:27:48.000 | so I think he beat you to that one. Although you beat him to a NeurIPS publication. So tough,
00:27:53.840 | tough crowd. But, you know, it was a struggle. And, you know, I'm kind of curious to hear, like, it sounds
00:28:00.800 | like, maybe, I guess, in their case, his family were trying to get him to learn through a really classic route:
00:28:10.800 | okay, stop going to school, start going to university. By doing it kind of more on your own,
00:28:17.520 | online, like, has that meant there's been, like, less of a struggle for you? Or have there been times you've
00:28:22.960 | found it a bit challenging to get people to take you seriously based, you know, on your age or gender or
00:28:28.240 | anything else? Well, I think the nice thing about having, you know, fast AI, answer AI, kind of my research
00:28:36.000 | as my side hustle, is that, you know, it's kind of less of, I guess, like, a part of my life as in
00:28:47.120 | comparison to maybe what Tanishq did, like he graduated high school when he was 10, which is like an insane
00:28:53.360 | thing to do. And I mean, really, the only option for him afterwards was like college. I feel like, generally,
00:29:00.640 | like my trajectory and kind of like my main hustle kind of route was very, more or less typical. And
00:29:09.200 | so not having not being like, I didn't really kind of face not being taken seriously or things like
00:29:15.840 | that, just because it kind of was more of a side hustle thing for me. And, but to be fair, like if
00:29:22.240 | it were to be a main hustle, I guess I could see, definitely see how that might kind of suck. I
00:29:29.600 | think there are definitely people out there that are like programs out there, especially like the MIT
00:29:35.520 | Primes program that tries to kind of help out these like younger students kind of unlock that, I guess,
00:29:42.080 | potential. Yeah, so, like, Vlad put in his time to invest in your success.
00:29:47.760 | Yeah, he was telling me, he thought like, at the beginning, when the directors of the program reached
00:29:53.280 | out to him and asked if he would help, he was like, this is going to be, like, a ginormous waste of
00:29:57.040 | time. But me and the student that he mentored before me both published papers. And so that was like,
00:30:05.040 | kind of eye opening, I think, like, I've heard around MIT, and just, in general, like, you do need to
00:30:11.760 | have like, or like, the word on the street is that you need to have a PhD, you need to have some sort
00:30:16.880 | of like higher level education in order to do these more like researchy and more like, I don't know,
00:30:21.520 | like interesting jobs. But I think that that's sort of, it's a little odd to me, because...
00:30:26.880 | Well, I mean, like, I think you're getting a bit of a like, you're seeing how it is now,
00:30:32.880 | right? And what you're saying is true now. Maybe at some points, it was a little less true. But like,
00:30:39.280 | right now, there are few people in the world who have more experience with modern AI
00:30:46.080 | than you, with your whatever, five or six years, like, it's, it's on the higher end,
00:30:53.440 | you know, and for somebody who spent 20 years learning, I don't know, Lisp and Prolog and
00:31:00.720 | Bayesian statistics and whatever, they're probably going to take five or six years to unlearn that enough
00:31:07.920 | to be able to start where you were when you were 13. So, like, for me, this is like a bit of a super
00:31:14.080 | power we have at Answer AI, is we basically totally ignore academic credentials and entirely focus on,
00:31:21.200 | like, portfolio, you know. And yeah, a lot of the folks that we end up wanting to work with are
00:31:30.400 | younger, you know, and often they never went to a fancy educational institution like MIT,
00:31:38.240 | because they were off forging their own thing. So I think, yeah, I think it is like,
00:31:42.560 | I think your experience should become the new norm, it'll probably take decades to get there,
00:31:49.120 | you know. Yeah, make the side hustle the main hustle. Yeah.
00:31:53.760 | Yeah, I mean, I guess another thing about, you know, being a woman in tech in general,
00:32:04.560 | it helps to have people who, you know, can help mentor you and so forth. So it's nice, like, we've got
00:32:11.520 | Audrey at Answer.ai, who was the founding president of PyLadies and probably knows more about how to deal
00:32:18.960 | with all that stuff than probably anybody in the world. So, yeah.
00:32:23.520 | Yeah, and I think, like, tying back to the MIT thing, I think another part of the reason why
00:32:29.520 | I wanted to come to MIT was there are just so many people interested in tech
00:32:34.640 | that are women. And it's definitely hard to find anywhere else. They have to keep it 50/50 for, I guess,
00:32:41.760 | whatever's sake. So it's a good high concentration of, I think, like, women that are interested.
00:32:48.560 | No, I mean, it matters. Of course it matters. Absolutely. And it's important to end up somewhere
00:32:53.360 | where you're going to do your best work surrounded by people you can do your best work with. Yeah.
00:33:00.080 | So coming back then to your research, what's it been like for you seeing, you know, o1 come out,
00:33:14.320 | this, you know, this renewed interest in kind of reasoning traces and reasoning combined with
00:33:20.080 | reinforcement learning? Yeah. Well, you know, how are you feeling about this research field that you got
00:33:28.480 | into a year or two ago? And are you planning to keep pushing on that yourself? Or is it like,
00:33:34.800 | oh, it's too mainstream now, time to do something else? No, I think this is definitely very exciting
00:33:40.320 | for me. I think, like, hey, I chose the right path, given that people at OpenAI are doing it.
00:33:44.640 | But I think, like, in general, beyond OpenAI, where I actually don't know what's going on
00:33:50.480 | under the hood apart from, like, Twitter rumors, there have been kind of, like, a bunch of papers,
00:33:55.280 | too. I believe there's one called, like, Quiet-STaR, and a few more along similar lines that kind of deal
00:34:01.280 | with the same problem. And I think this is, like, one of the bigger questions with large
00:34:07.120 | language models: because large language models, they kind of infer things,
00:34:13.520 | like, they have this sort of representation of language and therefore sort of, like, logic and
00:34:22.080 | knowledge. And somehow we need to, somehow they kind of put those things together in a way that is
00:34:28.080 | coherent. But how do we actually, like, extract the things that we want, right? So I think that's gonna
00:34:35.280 | stay a big question, whether it's reasoning, whether it's truth, whether it's kind of, like, knowing,
00:34:43.120 | like, what things go with what things, like, I don't know if that was clear. But yeah, I can, like,
00:34:48.720 | rerun that, too, if you want to. No, no, it's all good. Absolutely. Okay.
00:34:53.200 | Yeah, I mean, the reason I asked is I sometimes wonder that with myself. I'm like, I
00:35:00.560 | kind of like to poke at the areas that no one else is doing, you know. So, like, sometimes
00:35:08.640 | if something becomes super popular, it's like, okay, I'm out. Like, even with, like, ULMFiT,
00:35:14.880 | you know, that was the first kind of real large language model, you know, language model kind of
00:35:20.240 | application of that kind. And then suddenly everybody was doing it. And I kind of felt like,
00:35:27.040 | okay, maybe I don't need to be doing this now, because lots of people are doing it, like, there's
00:35:30.880 | something else I could, you know, uncover. It's, I guess that's a tricky thing with research is, like,
00:35:37.200 | do you want to keep uncovering in the same direction? Or do you want to explore new directions?
00:35:42.880 | Are there other kind of directions of research you've been thinking, like, oh, maybe when you're done with
00:35:46.640 | this, you'd like to go in this other way? Well, I mean, just being at Answer.ai, I feel like
00:35:52.880 | I've been exposed to sort of a lot of the different types of like research. I did like a few tangential
00:35:59.280 | tasks, just kind of, like, exploring the different projects that were going on. And one of
00:36:04.080 | the things that I haven't touched in a long time was kind of creating like an application, like a research
00:36:10.480 | buddy sort of type of thing. Um, and it's not as researchy as like my previous projects. Um,
00:36:20.720 | but I think like being able to kind of create something with like an end user in mind is something
00:36:27.440 | else that I want to, um, definitely pursue during my time. That's kind of our thing, isn't it? You know,
00:36:32.880 | our thing at Answer.ai is all about research and development with an end user task and a specific
00:36:40.240 | end user in mind. Exactly. And so kind of being able to still kind of experiment with different
00:36:48.080 | ideas, having that sort of researchy aspect, but also, I don't know, working towards a very tangible
00:36:52.960 | purpose, um, would be very cool. So, so before we wrap up, I guess like, um, anybody who's watched to this
00:37:01.760 | point of the interview, I'm thinking, you know, uh, they're thinking, well, I want to be more like Sarah,
00:37:09.040 | you know, I think you're a really inspiring role model for people. Um, and a lot of people, uh, where you
00:37:16.400 | were four or five years ago, you know, they're, they're just starting out probably feeling pretty
00:37:21.600 | intimidated, um, pretty overwhelmed and thinking like, well, I can't do this. I need a PhD or,
00:37:30.880 | you know, an academic who's a family member or something. Like, I guess, like, what, what would your
00:37:38.240 | advice be to 14-year-old Sarah? You know, if she was feeling... there must have been times you were
00:37:44.720 | feeling like, ugh, I can't do this, or I don't want to do this, or is this worth doing, or like,
00:37:49.200 | am I too weird, nobody else is doing this? Like, what, what kind of advice or feedback or thoughts
00:37:53.920 | would you pass along to that, to that Sarah? I would say kind of just to know what you're curious
00:38:02.880 | about and know what you're interested in and just go for it, kind of full send. Um, I'm glad you said that,
00:38:10.640 | so, like, it's about, like, you actually have to care and enjoy
00:38:16.560 | it. Like it's, if you treat it as a grind, you're probably not going to do it, right?
00:38:20.240 | Exactly. Like AI is probably not going to be everyone's cup of tea. Um, but I was fortunate
00:38:26.160 | enough to have discovered it quite early on. And I just knew that I was very interested in it. And
00:38:32.000 | obviously there were times where, like you said, like things got hard. I wanted to like drop everything.
00:38:37.280 | Yeah. So sometimes you've got to grind it out. You've got to, you've got to grind it out. But
00:38:41.120 | I think like, know the reason why you were interested in the first place. I think if you're interested in
00:38:46.960 | anything, um, there's got to be something very genuine, something very, I don't know, compelling
00:38:51.840 | about it. Um, so just remembering back to the first time, um, you were interested in it. Um, and kind of
00:38:58.320 | just knowing that like, there is an end, um, goal in mind that you'll probably reach if you keep at it.
00:39:04.800 | Well, I'm definitely gonna share this story with my nine-year-old daughter who loves coding and she
00:39:12.560 | loves math and she loves calculus. And, uh, I think she'll find this very inspiring. Uh, and I hope that
00:39:21.200 | other kids and adults do as well, but I know you're, uh, definitely an inspiration to me, Sarah. So
00:39:28.000 | thank you so much for this time and for being involved.
00:39:30.240 | Thank you so much for having me again. Okay. Bye. Bye.