Hi, I'm Jeremy Howard from Answer.ai, and I'm joined here by Sarah. Sarah is Answer.ai's first ever fellow, first founding member of our fellowship program, and today we're going to hear a bit about Sarah's background and experiences. I think you'll find her a really inspiring and amazing human being. Thanks for joining us, Sarah.
Thank you for having me. So let me just rewind a little bit for those who haven't had the pleasure, like I have, of getting to know you and who you are. Sarah has the remarkable achievement of getting published at NeurIPS, the world's premier academic AI conference, whilst still in high school.
And I met Sarah at NeurIPS. The paper she published is extremely high-quality work and extremely interesting. And I was thrilled to discover that her background included learning from my courses, the fast.ai courses, and we stayed in touch. Everything I saw Sarah do was incredibly impressive, and so I wanted more than anything else to find an opportunity to work with her.
So we offered her this role as an Answer.ai fellow. So Sarah, maybe to start with, could you summarize for a somewhat layperson audience the research that you were presenting at NeurIPS? What was the purpose of the work, of the research you were doing, and what did you find? Sure.
So I started this project, I think, towards the beginning of 2023. At that time, ChatGPT was still relatively, or I think, yeah, relatively young. And we were interested in doing this thing with multi-step reasoning. So at the time, these large language models really struggled with producing coherent and correct mathematical reasoning and logic.
And so we were interested in finding ways to improve that. So one of the papers that we kind of based our approach off of was from OpenAI. They used these reward models as verifiers to kind of verify whether these steps in a multi-step reasoning process would be correct or incorrect.
And what was interesting about their approach was that instead of having one reward model that would produce a single reward, kind of telling you whether the solution was correct or incorrect or somewhat of a mix between the two, this would basically grade each step. So it would tell you where exactly the response went wrong.
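To make the contrast concrete, here is a rough editorial sketch in Python of outcome scoring versus process scoring. The function names and the toy step scorer are purely illustrative, not from OpenAI's paper:

```python
# Illustrative sketch of the outcome-vs-process reward distinction.
# `score_outcome` and `score_step` stand in for trained reward models.

def outcome_reward(solution_steps, score_outcome):
    # Outcome supervision: one scalar grade for the whole solution.
    return score_outcome(" ".join(solution_steps))

def process_rewards(solution_steps, score_step):
    # Process supervision: one score per step, so you can see
    # exactly where the reasoning went wrong.
    return [score_step(step) for step in solution_steps]

steps = [
    "Let x be the number of apples.",
    "Then 2x + 3 = 11, so x = 4.",
    "Therefore there are 5 apples.",  # contradicts x = 4
]

# A toy step scorer that flags the contradictory final step:
def toy_score_step(step):
    return 0.0 if "5 apples" in step else 1.0

print(process_rewards(steps, toy_score_step))  # [1.0, 1.0, 0.0]
```

The outcome model can only say "something is wrong somewhere"; the process model points at the third step.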
And yeah, it would give you kind of a more specific kind of feedback in terms of that. Let's talk about OpenAI for a moment then, before coming back to how you built on this, because this is very much in the news now. So OpenAI recently released a pair of new models called o1-preview and o1-mini, which are dramatically better at reasoning than previous models.
And they seem to be taking advantage of a training system that uses this kind of approach. OpenAI has explained that these models have been explicitly taught, or been given feedback on, their reasoning steps and have learned to become better reasoners as a result. So it's interesting that you picked a problem which, I think not coincidentally, has turned out to be really important in practice.
Right. And so kind of more on that, the actual origins of our project weren't on mathematical reasoning. Originally, we had been looking at sort of bias in models in terms of the more like kind of like ethical implications. And we realized that a lot of times, like statements were not logically connected with each other.
And so this kind of led us down the route of logical reasoning. But yeah, that was kind of a tangent. But back to the actual... How did you build on top of then that OpenAI work? Did you take it in a different direction or you kind of took it a little further in the same direction or what?
Well, I mean, the PRM paper. Yeah. So in that paper, they basically only trained the verifiers. They showed that the more process-reasoning-oriented ones were better than the objective, sort of holistic ones. And so we decided to take it one step further and actually use it in an RLHF pipeline to update a sort of, like, completion model.
And so... I know this is second nature to you, so let's just rewind a bit. So RLHF is reinforcement learning from human feedback. So this is the third step in the process that OpenAI used to build stuff like the ChatGPT model, where they get human beings to give feedback about different possible answers to a question, and it helps the model learn better what human beings are looking for.
Right. So typically a human is given two model responses. The human rates them or picks the better one of the two. And then based on that preference data, a reward model is trained. But for us, we were... There was a dataset released by OpenAI that had that, like, process feedback, where individual steps were rated correct or incorrect by human graders, and we trained a reward model based on that.
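As a rough sketch of the two training signals being contrasted here, purely illustrative and not code from the paper or the project, the pairwise preference loss used in standard RLHF reward modeling versus a per-step loss for process supervision might look like:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_preference_loss(r_chosen, r_rejected):
    # Standard RLHF reward modeling: learn to rank the human-preferred
    # response above the rejected one (a Bradley-Terry-style loss).
    return -math.log(sigmoid(r_chosen - r_rejected))

def per_step_loss(step_scores, step_labels):
    # Process supervision: each reasoning step has its own
    # correct (1) / incorrect (0) label, trained with per-step
    # binary cross-entropy on the reward model's step scores.
    losses = [
        -(y * math.log(sigmoid(s)) + (1 - y) * math.log(1.0 - sigmoid(s)))
        for s, y in zip(step_scores, step_labels)
    ]
    return sum(losses) / len(losses)

# Both losses shrink when the reward model agrees with the human signal:
print(pairwise_preference_loss(2.0, 0.5) < pairwise_preference_loss(0.5, 2.0))  # True
print(per_step_loss([4.0, -4.0], [1, 0]) < per_step_loss([-4.0, 4.0], [1, 0]))  # True
```

The key difference is just where the label lives: one preference per response pair, versus one label per reasoning step.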
And so in terms of like the actual RLHF pipeline, there were a few key changes that we had to make just because the setup of our, I guess, process was a little different. I can talk on those a little bit more, but I don't know if you have any more like conceptual questions or any clarification.
Oh, that's okay. So I just wanted to kind of... I think it helps to start, you know, with like where... what you've been working on recently. So basically it's taking a really kind of classic branch of research that turned out to be of great practical import, which is moving from just like, is this a good answer or not?
To, is this a good series of steps to get to an answer or not? And then building on that to then say, okay, with that, we can then hopefully come up with a model which can actually create the correct steps and as a result doesn't have to jump straight to the answer straight away.
And yeah, as a TL;DR, your model had encouraging results. I mean, you weren't able to train it with the large amount of compute you would have liked, since high school researchers don't have access to OpenAI's computers. But it looked good enough, at least, to get a NeurIPS paper. So actually, I wanted to kind of then step back a bit to say, like, right, I've...
through my work with fast.ai, I would say, like, every year there's maybe a couple of high school students who I come across who do fast.ai and, you know, become an extremely competent practitioner. And so I've kind of got to know a few folks like you. And generally, the experience is like, a bit tricky because there aren't other people at school who have the same interests and capabilities, either teachers or students.
So, you know, how did this work? How did this work for you? How did you get into artificial intelligence and how did you then follow that interest, even though I assume you did not have a group of peers who were following that interest with you? Yeah, you're definitely right about the peers thing.
I think I started towards the end of middle school, beginning of high school. What kind of age is that? I don't know what middle school is in America. Oh, okay. Yeah, I think I was, like, 13, 14 years old. And so I had taken, like, an Algebra 1 course; that was, like, the highest math level.
But my brother was taking the course because he's much older than me. He's, you know, interested in this sort of thing. And so I was like, hey, let me, you know, watch and let me follow along. And I think, like, the one thing that kind of carried me through the course, if you will, was actually, like, the kind of ease of, like, I don't know, just watching the lectures, playing around with the notebooks.
It was something... For those who haven't done it, I'll just explain there that, like, the fast.ai course is a bit unusual in that even though we do get to the point of, like, re-implementing recent research papers, we do it in this top-down way where you start out, like, building stuff, basically.
So it's, it's... And then you kind of only gradually get into the complexity of the underlying implementation as needed. So I guess you're saying that was, like, helpful for you as a teenager in working through it? Yes, very. I think I liked starting out with, like, the higher level ideas just because I got to see how they kind of worked.
I feel like if you'd started with, I don't know, like, backpropagation or, like, any of those, like, you know, fancy calculus things, it would have been a lot harder. That's right, calculus. Hmm. But, yeah, it was just nice being able to start out with, like, a bigger picture view of things. And that was definitely interesting.
I had no idea. Did you and your brother help each other? Were you kind of co-studying? We kind of co-studied in, like, the very beginning. I think we watched, like, a lecture or two together. But as time progressed, like, I realized that I didn't really, like, need him there next to me.
Things were more or less understandable. I think there was, like, a forum, too. If I had any, like, questions, I could, like, go and see if there were any other issues. Have you used that? Hmm? You used the forum? I was mostly, like, a watcher. Yeah. Like, from afar, kind of, if anyone had similar questions, that would be super helpful.
Wasn't too much of a poster myself. But it's very, it was very cool being able to see, like, a bunch of people kind of following along and doing their own thing. Like, even if, like, my close friends weren't doing it, the only person I knew who was doing it was my brother, who's, like, much older than me.
It was nice, like, to think about having, like, that sort of, like, I don't know, there's just, like, a community of people online that are just really curious and driven. Exactly. I remember a teenage girl from Bangladesh emailing me to say, like, hey, I've just finished the fast.ai course.
This seems really important and really good. But, like, all my friends think it's weird. And, like, I don't know anybody else that, like, really uses computers much. So, like, is it okay? Am I doing something wrong? And I was like, no, it'd be better than okay. It's amazing, you know.
And she went on to get a fellowship at Google and got flown out to Silicon Valley. But, you know, it can be, yeah, it can be a bit weird. And actually, I remember you telling me that going to NeurIPS was pretty important for you, because suddenly you were surrounded by real-life versions of these people and realized, like, oh, they're not just distant figures, right?
That must have been amazing. Yeah. It was really cool. It was, like, one thing to, like, I don't know, kind of just, like, see the presentations and talks and whatever, but, like, actually walking through, like, the poster halls and getting to, like, ask people about their research...
Like, wow, these people, like, are also interested in the same things and they, like, spend so much time doing, like, or, like, asking the same questions that I'm interested in and things like that. This is a conference with, like, 10,000 AI researchers or something, all coming into New Orleans, and everywhere you go around the conference building, all the pubs and restaurants are full of AI researchers. Yeah, it's pretty insane. I felt a bit the same way when I first went to San Francisco, you know, coming from Australia, where my interests were... I didn't really know anybody else who had them. And it was nice to suddenly find myself in a town with lots of other people who thought that what I was doing was interesting and they cared.
Like, it matters, you know. But you must have had a lot of tenacity because you were going for, what, four or five years before you got to that point. I mean, that must have been a huge amount of work for you. Yeah, I think, like, just, like, it was kind of my, like, side hustle.
Like, after school I'd have, you know, extra free time. This would be sort of my thing. I think part of it was also, like, AI was kind of getting, you know, hot in the news. It was responsible for a ton of things: really cool papers, really cool breakthrough algorithms.
And it felt like I was, like, in on something, you know, because more or less, like, I could understand, like, what was going on. I think, like, there was, like, this headline, I think, about, like, AlphaFold. And I was like, wait, like, I know, kind of, more or less, what's going on there.
And so that kind of helped. But that was the protein folding model from Google, right? Yes. And it had, like, it had solved the protein folding problem, as, like, the headlines kind of reported. And so that kind of, like, helped me kind of stay interested, stay driven. And I guess it kind of just, like, blossomed once I, like, hit high school.
And I was in this program called MIT PRIMES that pairs high school students with, like, graduate students or professors to do this sort of, like, research project. And so for me, I was paired with Vlad Lialin, who was a PhD student at UMass Lowell, and now he's graduated with his PhD.
And he became your co-author on your NeurIPS paper. Yes, he was my mentor and co-author and basically taught me everything I needed to know about AI research. But through that program, I was able to kind of take on, like, a new perspective. Because this entire time, I'd been kind of, like, a student.
I'd kind of, like, learned about these things, kind of played with them, kind of analyzed them from, like, the top down, picked apart their inner workings more or less. But now I was kind of presented with the question of, like, so what, like, what's next, right? And so... Can I just rewind for a bit, like, about the side hustle?
Because, like, as a homeschooling dad, this kind of interests me, because I think it might be somewhat more aligned with how I think about, you know, school education. You know, rather than spending more time doing your math homework, you are spending time doing something that's not in the curriculum.
But if I'm thinking about it, I'm thinking, okay, you're doing fast.ai lesson four, you have to use the chain rule in calculus, which you would not have done in Algebra 1. So you must have been, like, I don't know... what were you doing? Like, going to Khan Academy and stuff like that to kind of learn this stuff?
And, like, I imagine that then by the time you covered it in high school, it would have been reasonably straightforward for you because you've been applying it for years at that point. Right. I realized, like, I'd taken multivariable at some point in high school, and I was like, wait, like, these concepts are very, very familiar to me.
But I think, like, you're right in the sense that, like, for me, a lot of things were nonlinear. I think, like, in American schooling, especially, like, there's a very, you know, straightforward progression from algebra one, to, you know, geometry, to whatever, to eventually calculus. And I think, like, because I was just so interested in, like, AI, machine learning and things like that, I kind of just, like, took the initiative and learned things that I needed to know.
And so when it actually came time for math class, like, a lot of things felt out of order just because they weren't... Yeah, and you would have been better at that math because you knew what was important, you knew what it was for. Like, as you know, my eight-year-old has recently started learning derivatives, and it's definitely not in her curriculum, but, like, we don't follow the curriculum because I figure, like, yeah, if you do fun and interesting stuff, then it all comes around eventually.
And, you know, yeah, I'm just trying to think, like, what about, like, I mean, coding even more so, right? Like, in the US curriculum, I don't think there's much coding at high school generally, but in the fast.ai course, you would have had to have become reasonably proficient in Python to succeed there.
So I think I was lucky enough to have, like, a few Python classes at my middle school, but then again, very different ways of, like, thinking about programming. I think a lot of, like, introductory programming classes, I don't know, they're very, like, game-centered. I feel like a lot of the, like, intro classes, you just, like, solve a puzzle and that's, like, the entire course.
But I think, like, what's helpful about, I don't know, fast.ai and maybe just, like, Python in general is that it's pretty readable. I think a lot of the notebooks for fast.ai, they were on Colab, so I didn't have to worry about, like, a terminal or, like, VS Code and things that were, you know, more complex.
Yeah, you just click a link and start typing. Go button. Yeah. Exactly. So that was nice. And then also, there were, like, a ton of resources online at that point. Even now, like, you know, there's, like, ChatGPT, there's AI Magic. Like, you can literally just, you know... the barrier to access is definitely much lower nowadays.
But it's just something that I had to learn on the fly. And thankfully, there were enough resources to do so. Well, sorry, people watching this, but AI magic's an internal tool at AnswerAI, so you don't get to use it. Maybe I shouldn't have said that. That's fine. No, the fact that it exists is a known thing.
I talked about it on the podcast ages ago, so. Glad I haven't spilled the beans. Now you've just increased the mystery. Okay, so let's fast forward a little bit. So you actually started working part-time at AnswerAI before you even finished high school. And then, you know, we had a bit of a conversation about what next for you.
And you felt like MIT was the right place for you to be. So you've been there now for a few months, I guess. And I think you're living there at MIT, right? I'd love to hear, like, what's your experience been so far? Because, like, again, like, I'm imagining first year at MIT, there's still not going to be loads of people, either students or teachers you're dealing with, who know enough about AI to be published at NeurIPS.
Like, have you found... like, yeah, tell me about the experience in general, and also, you know, whether you're mainly continuing to kind of work with Vlad, and of course, we'll talk shortly about your Answer.ai work, and what you're learning and experiencing and stuff at MIT.
Yeah, so I think when I first started talking to you about AnswerAI, kind of working there, doing a fellowship, I really seriously considered taking a gap year, so that I could, you know, pursue my research projects a little more seriously, have a little more time on my hands. But ultimately, I decided against it just because I feel like MIT is such, like, I hate to say this, it sounds like so basic, but it's such, like, a great place to be.
I think you're definitely right. Like, again, none of my peers, or like, not none, but like, the majority of my peers aren't really interested in AI. But they're amazing people. They, I don't know, there's just a very broad variety in like, what they're interested in. And they're all super, super passionate about it.
And I think like, if I think about my future, I don't know, five, 10 years down the line, I'm not exactly sure where I'll be. Maybe it'll be doing AI research, maybe it'll be entirely something else. And I think having that exposure to people that are interested in other things, people that are the top of their field and whatever that might be is, it's very exciting.
And so, yeah, I guess, like, I guess that's it. Yeah. And so at the same time, you've kind of, you know, got this multi-pronged thing going on: you're working at Answer.ai, you're also doing MIT, and I guess Answer.ai is your new side hustle, where it used to be fast.ai.
You've been working with Austin Huang, who is one of the absolute top AI practitioners in the world. He was a project leader at Google, you know, building the retrieval stuff for Google's deep learning models that became Gemini. He's the creator of Gemma.cpp. Yeah. What's it been like, you know, getting involved with working with folks like Austin, you know, what's been surprising about it or, you know, what kind of, what's the experience been like there?
Yeah. So I think at first I was definitely a little intimidated. The laundry list of cool things Austin has done is just, like, insane. But I realized, like, over time, like, Austin and, like, the rest of the crew at Answer.ai, they're very down to earth, very happy to, like, explain concepts, very happy to answer questions.
And I think that's, like, one of the things I've enjoyed the most about working with Austin. So for some context, we put together WebGPU puzzles. And so through that, I had to kind of, like... Let's just take a look at that then, shall we? Yeah, sure. So here's gpu.cpp, at answer.ai.
Okay. Okay. So you and Austin built this together. Yes. And let's talk a bit about what this is. So this is, like... let's go through a few here. These are some pretty hardcore things. Basically, what you're doing here, things like a 1D convolution, a prefix sum, you're asking people to write code, fill in something for, you know, what is a fairly complex thing, written in hardcore, low-level GPU code.
Sorry. Kind of, yeah, hardcore, low-level GPU code, which, if it's taught at university at all, would probably be in, like, a master's program or something like that. It's just, like, extremely, extremely advanced. And you're also doing it in, like, a brand-new framework that Austin invented.
So it kind of reminds me a bit of Ada Lovelace in some ways, like, you know, she was the first computer programmer. And she was programming a computer that had just been invented. I mean, how do you... Yeah, I mean, is that your background? You've got years of hardcore CUDA, GPU, low-level programming background?
Like, how did you implement and contribute to this project? No, I think my entire GPU programming background was probably, like, the three hours I spent solving Sasha Rush's GPU programming puzzles. And that was really fun for me. But in terms of just, like, getting this, putting this together, I think, like, learning on the fly again was, like, a huge thing for me.
And also just, like, knowing that the framework wasn't complete. And so if I had, like, any questions, that Austin would be more than happy to answer them. It's nice to have the guy that wrote the framework there to ask questions about... Exactly. Yeah. But another thing that I kind of just, like, reminded myself of was that...
So for a little context, Sasha Rush is, I believe, a professor at Cornell. He wrote these GPU puzzles. You can run them in CoLab. But the idea is basically to kind of distill down the ideas behind, you know, this sort of paradigm of, like, parallel GPU computing, and then have it presented in a very fun, interactive, sort of, like, puzzle-solving way.
And so through that, I kind of learned, like, hey, like, this is how you actually think in parallel, essentially. And so when I was implementing these, the web GPU version, I kind of reminded myself that, like, hey, even though this is, like, a new framework, things are a little hacky here and there, like, the essential idea is the same.
And for me, the goal for these puzzles was the same, kind of, along the lines of the experience I had with Sasha Rush's, kind of just distilling down those really core ideas into something that beginners could digest, anyone could digest, and really have, like, a fun time with. And so that's kind of, like, my, like, overarching philosophy when it came to these puzzles.
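As one example of the kind of core idea these puzzles distill, here is a pure-Python sketch of the classic Hillis-Steele parallel prefix sum. This is not code from the WebGPU puzzles themselves, just an illustration of the "think in parallel" pattern, with a list comprehension standing in for GPU threads:

```python
# Hillis-Steele inclusive prefix sum: in each round, every position
# adds the value `offset` slots to its left; after about log2(n)
# rounds the array holds its running totals. On a GPU, each index
# would be handled by a separate thread within a round.

def parallel_prefix_sum(xs):
    xs = list(xs)
    offset = 1
    while offset < len(xs):
        # One "parallel" round, simulated sequentially:
        xs = [
            xs[i] + (xs[i - offset] if i >= offset else 0)
            for i in range(len(xs))
        ]
        offset *= 2
    return xs

print(parallel_prefix_sum([1, 2, 3, 4]))  # [1, 3, 6, 10]
```

The point of the puzzle framing is exactly this shift: instead of a sequential loop carrying a running total, you reason about what every position can compute simultaneously in each round.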
Yeah, I mean, I just think it's very... It is inspiring, though, right, because I hear so many people say, you can't expect to make any progress in a career in AI research or practice without a PhD. And, you know, you are so slow, Sarah, you still don't have a PhD, you know, my goodness.
And yet, you know, like, many, many, many fast.ai alumni, I'm not saying fast.ai is the unique way to do this, but it's a very common way to do this, you've forged a great path. And, you know, to be honest, I did encourage you to consider joining Answer.ai full-time, at least for a year. Like, your strength, you know, your portfolio is strong enough that we're just about the hardest company in the world to get into, and we're offering you a position.
So it, it definitely worked out, you know. I do want to ask, though, like, a lot of people judge on things other than your pure demonstrated competence. A lot of people will at least implicitly judge based on, you know, the fact that somebody is very young, or the fact that somebody is female.
And so I'm thinking, for example, of my friend Tanishk, who finished high school when he was 10, and he wanted to go to university. And he and his family faced a lot of prejudice, you know, and struggled to get somebody to understand that he was ready to go to university.
And when he finally found a, you know, professor ready to take him on, they were right. He shone at university, and he went on to finish a really impressive PhD, you know. So I think he beat you to that one. Although you beat him to a NeurIPS publication.
So tough, tough crowd. But, you know, it was a struggle. And, you know, I'm kind of curious to hear... it sounds like, maybe, I guess, in his case, his family were trying to get him to learn through a really classic route: okay, stop going to school, start going to university.
By doing it kind of more on your own, online, like, has that meant there's been, like, less of a struggle for you? Or have there been times you've found it a bit challenging to get people to take you seriously based, you know, on your age or gender or anything else?
Well, I think the nice thing about having, you know, fast.ai, Answer.ai, kind of my research as my side hustle, is that, you know, it's kind of less of, I guess, like, a part of my life in comparison to maybe what Tanishk did. Like, he graduated high school when he was 10, which is, like, an insane thing to do.
And I mean, really, the only option for him afterwards was, like, college. I feel like, generally, like, my trajectory and kind of, like, my main hustle route was more or less typical. And so I didn't really face not being taken seriously or things like that, just because it was more of a side hustle thing for me.
And, but to be fair, like, if it were to be a main hustle, I guess I could definitely see how that might kind of suck. I think there are definitely programs out there, especially, like, the MIT PRIMES program, that try to kind of help out these, like, younger students, kind of unlock that, I guess, potential.
Yeah, so, like, Vlad put in his time to invest in your success. Yeah, he was telling me, he thought, like, at the beginning, when the directors of the program reached out to him and asked if he would help, he was like, this is going to be, like, a ginormous waste of time.
But me and the student that he mentored before me both published papers. And so that was, like, kind of eye-opening. I think, like, I've heard around MIT, and just in general, like... the word on the street is that you need to have a PhD, you need to have some sort of, like, higher-level education, in order to do these more, like, researchy and more, like, I don't know, like, interesting jobs.
But I think that that's sort of, it's a little odd to me, because... Well, I mean, like, I think you're getting a bit of a like, you're seeing how it is now, right? And what you're saying is true now. Maybe at some points, it was a little less true.
But, like, right now, there are few people in the world who have more experience with modern AI than you, with your, whatever, five or six years. Like, it's, it's on the higher end, you know. And for somebody who spent 20 years learning, I don't know, Lisp and Prolog and Bayesian statistics and whatever, they're probably going to take five or six years to unlearn that enough to be able to start where you were when you were 13.
So, like, for me, this is like a bit of a super power we have at Answer AI, is we basically totally ignore academic credentials and entirely focus on, like, portfolio, you know. And yeah, a lot of the folks that we end up wanting to work with are younger, you know, and often they never went to a fancy educational institution like MIT, because they were off forging their own thing.
So I think, yeah, I think it is like, I think your experience should become the new norm, it'll probably take decades to get there, you know. Yeah, make the side hustle the main hustle. Yeah. Yeah, I mean, I guess another thing about, you know, being a woman in tech in general, it helps to have people who, you know, can help mentor you and so forth.
So it's nice, like, we've got Audrey at Answer.ai, who was the founding president of PyLadies and probably knows more about how to deal with all that stuff than probably anybody in the world. So, yeah. Yeah, and I think, like, tying back to the MIT thing, I think another part of the reason why I wanted to come to MIT was there are just so many people interested in tech that are women.
And it's definitely hard to find anywhere else. We have to keep it 50/50 for, I guess, whatever's sake. So it's a good high concentration of, I think, like, women that are interested. No, I mean, it matters. Of course it matters. Absolutely. And it's important to end up somewhere where you're going to do your best work surrounded by people you can do your best work with.
Yeah. So coming back then to your research, what's it been like for you seeing, you know, O1 come out, this, you know, this renewed interest in kind of reasoning traces and reasoning combined with reinforcement learning? Yeah. Well, you know, how are you feeling about this research field that you got into a year or two ago?
And are you planning to keep pushing on that yourself? Or is it like, oh, it's too mainstream now, time to do something else? No, I think this is definitely very exciting for me. I'm like, hey, I chose the right path, if people at OpenAI are doing it.
But I think, like, in general, there are, along with OpenAI, where I actually don't know what's going on under the hood apart from, like, Twitter rumors... there have been kind of, like, a bunch of papers, too. I believe there's one called Quiet-STaR, and a few more along similar lines that kind of deal with the same problem.
And I think this is, like, one of the bigger questions with large language models: because large language models, they kind of infer things. Like, they have this sort of representation of language, and therefore sort of, like, logic and knowledge. And somehow they kind of put those things together in a way that is coherent.
But how do we actually, like, extract the things that we want, right? So I think that's gonna stay a big question, whether it's reasoning, whether it's truth, whether it's kind of, like, knowing, like, what things go with what things, like, I don't know if that was clear. But yeah, I can, like, rerun that, too, if you want to.
No, no, it's all good. Absolutely. Okay. Yeah, I mean, the reason I asked is I sometimes wonder that with myself. I'm like, I kind of like to poke at the areas that no one else is doing, you know. So, like, sometimes if something becomes super popular, I'm like, okay, I'm out. Like, even with, like, ULMFiT, you know, that was the first kind of real large language model application of that kind.
And then suddenly everybody was doing it. And I kind of felt like, okay, maybe I don't need to be doing this now, because lots of people are doing it, like, there's something else I could, you know, uncover. It's, I guess that's a tricky thing with research is, like, do you want to keep uncovering in the same direction?
Or do you want to explore new directions? Are there other kind of directions of research you've been thinking, like, oh, maybe when you're done with this, you'd like to go in this other way? Well, I mean, just being an answer, I feel like I've been exposed to sort of a lot of the different types of like research.
I did like a few tangential tasks, just kind of like exploring the different projects that was that were going on. And one of the things that I haven't touched in a long time was kind of creating like an application, like a research buddy sort of type of thing. Um, and it's not as researchy as like my previous projects.
Um, but I think like being able to kind of create something with like an end user in mind is something else that I want to, um, definitely pursue during my time. That's kind of our thing, isn't it? You know, our thing at Answer.ai is all about research and development with an end user task and a specific end user in mind.
Exactly. And so kind of being able to still kind of experiment with different ideas, having that sort of researchy aspect, but also, I don't know, working towards a very tangible purpose, um, would be very cool. So, so before we wrap up, I guess, like, um, anybody who's watched to this point of the interview is probably thinking, well, I want to be more like Sarah. You know, I think you're a really inspiring role model for people.
Um, and a lot of people, uh, where you were four or five years ago, you know, they're just starting out, probably feeling pretty intimidated, um, pretty overwhelmed, and thinking, like, well, I can't do this, I need a PhD, or, you know, an academic who's a family member, or something. I guess, like, what would your advice be to 14-year-old Sarah?
You know, if she was feeling, there must have been times you were feeling like, ugh, I can't do this, or I don't want to do this, or is this worth doing, or like, am I too weird, nobody else is doing this? Like, what, what kind of advice or feedback or thoughts would you pass along to that, to that Sarah?
I would say kind of just to know what you're curious about and know what you're interested in and just go for it, kind of full send. Um, I'm glad you said that. So, like, it's about, like... you actually have to care and enjoy it.
Like it's, if you treat it as a grind, you're probably not going to do it, right? Exactly. Like AI is probably not going to be everyone's cup of tea. Um, but I was fortunate enough to have discovered it quite early on. And I just knew that I was very interested in it.
And obviously there were times where, like you said, like things got hard. I wanted to like drop everything. Yeah. So sometimes you've got to grind it out. You've got to, you've got to grind it out. But I think like, know the reason why you were interested in the first place.
I think if you're interested in anything, um, there's got to be something very genuine, something very, I don't know, compelling about it. Um, so just remembering back to the first time, um, you were interested in it. Um, and kind of just knowing that like, there is an end, um, goal in mind that you'll probably reach if you keep at it.
Well, I'm definitely gonna share this story with my nine-year-old daughter who loves coding and she loves math and she loves calculus. And, uh, I think she'll find this very inspiring. Uh, and I hope that other kids and adults do as well, but I know you're, uh, definitely an inspiration to me, Sarah.
So thank you so much for this time and for being involved, Sarah. Thank you so much for having me again. Okay. Bye. Bye.