
How students build with Claude | Code w/ Claude



00:00:00.000 | All right. Hey, everybody. How are you doing? My name is Greg. I lead student outreach here at
00:00:11.520 | Anthropic, and I am so excited to be sharing the stage with some of the brightest young minds in AI.
00:00:16.440 | Just a little context for this panel. So, at Anthropic, we've given out API credits to thousands
00:00:23.680 | of students to help them build things at school, and so what you're about to see is a very small
00:00:29.160 | glimpse at what students have been creating with those API credits. It's a very wide variety of
00:00:35.220 | things, as you're about to notice. Some of these projects are very humorous and funny. Some of these
00:00:40.400 | projects are very serious and important. Some of these students are working on one project, and some
00:00:45.760 | of these students have been building an app every single week throughout all of 2025. So, I think if
00:00:52.480 | I was gonna sum up what I've learned from running this program, it's that the future is really, really
00:00:57.360 | bright in the hands of these students. So, without further ado, I'm gonna invite up our first speaker,
00:01:01.920 | Isabel from Stanford.
00:01:03.420 | All right. Thank you for having me. It's a privilege to be here. My name is Isabel. I'm a senior at Stanford,
00:01:15.600 | where I study aeronautics and astronautics, and I'm doing my honors in international security. And today,
00:01:20.740 | I'm here to talk to you about my honors work, which is on finding nuclear weapons in outer space, and how I use
00:01:26.220 | Claude to help me do it. So, for those of you that may not know, Article 4 of the Outer Space Treaty bans
00:01:34.780 | the placement of nuclear weapons in outer space. Now, other arms control agreements that you may have
00:01:39.900 | heard of, like START and New START, include provisions for verification and monitoring. So,
00:01:44.780 | nations are shown to be compliant with their treaty obligations using inspection systems. We have on-site
00:01:50.780 | inspections where inspectors will go and look at each other's delivery vehicles and inspect for the
00:01:55.740 | presence of nuclear warheads. We don't have anything like that for outer space,
00:01:59.340 | mostly because we signed the Outer Space Treaty in 1967, and there were no technologies to do that
00:02:04.220 | kind of inspection, right? How would you go about approaching a satellite in orbit that might be
00:02:10.060 | carrying a nuclear weapon and inspecting it for the presence of such a device? Daunting for the 1960s,
00:02:15.420 | daunting today. And this became a problem recently, in 2024, in April of last year.
00:02:24.860 | The Biden administration announced that the United States assesses that Russia is developing a space
00:02:30.220 | vehicle that carries a nuclear weapon. Now, this was pretty destabilizing for the international
00:02:35.500 | community. We've had a lot of dispute in the UN Security Council recently about how to handle this
00:02:40.780 | potential violation of the Outer Space Treaty. Given that we don't have a verification mechanism for
00:02:45.980 | compliance with the Outer Space Treaty, I started to wonder if it would be possible to implement such a
00:02:50.220 | system. Particularly given that the US Space Force tracks 44,800 space objects today, how would you
00:02:56.380 | begin to know which one of those is the suspected nuclear weapon? So this brings me to my research
00:03:04.220 | question. Is it feasible to perform an in-space inspection mission where you inspect a target
00:03:10.060 | satellite for the presence of a nuclear warhead on board? Daunting question. It has a lot of interesting
00:03:15.740 | technical and political facets to it, but for one particular aspect of it, I was able to use Claude
00:03:21.420 | to my advantage. So I looked specifically at the feasibility of detecting the nuclear weapon with an
00:03:27.900 | x-ray system. So you fly an x-ray source and detector on two different inspector satellites in space,
00:03:36.780 | have them rendezvous with the suspected nuclear warhead target, and scan it for the presence of a nuclear
00:03:42.060 | weapon on board. I wanted to know if this would ever be possible. No one's ever tried using x-rays
00:03:47.260 | in space. There are interesting questions around whether there's too much noise in the space background
00:03:51.820 | environment to detect the source signal. So I built a computational simulation to see if this
00:03:58.380 | would ever be possible. And to do it, I used Claude. I used this very complicated CERN software package
00:04:05.740 | called Geant4. I am not a particle physicist. I did not know how to approach this software package
00:04:11.420 | and write the C++ code. But I was able to make a desktop application to do my simulation using Claude.
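Her actual simulation ran on Geant4 in C++, and none of that code appears in the talk. As a purely illustrative sketch of the feasibility question it answers, here is a toy Python model (every rate and the attenuation factor are invented placeholders) that checks whether detector counts with dense fissile material in the x-ray beam separate cleanly from counts without it, given background noise.

```python
# Toy sketch (not the speaker's Geant4 application): a back-of-the-envelope
# check of the core feasibility question -- can an x-ray signal transmitted
# through a target be distinguished from the space background? All numbers
# below are made-up placeholders.
import numpy as np

rng = np.random.default_rng(0)

background_rate = 50.0   # assumed background counts/s at the detector
source_rate = 200.0      # assumed source counts/s with nothing in the beam
attenuation = 0.7        # assumed fraction absorbed by dense material in the beam
exposure_s = 10.0        # integration time per exposure

def expected_counts(dense_material_present: bool) -> float:
    transmitted = source_rate * ((1.0 - attenuation) if dense_material_present else 1.0)
    return (background_rate + transmitted) * exposure_s

def simulate(dense_material_present: bool, trials: int = 10_000) -> np.ndarray:
    """Draw Poisson-distributed detector counts for repeated exposures."""
    return rng.poisson(expected_counts(dense_material_present), size=trials)

empty = simulate(False)
shielded = simulate(True)

# A crude detection statistic: how many background-only sigma separate the two cases?
sigma = (empty.mean() - shielded.mean()) / empty.std()
print(f"mean counts, no dense material: {empty.mean():.0f}")
print(f"mean counts, dense material:    {shielded.mean():.0f}")
print(f"separation: {sigma:.1f} sigma")
```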
00:04:17.260 | And it was incredibly exciting. It worked. So what you're seeing in this picture is like a very, very quick
00:04:25.580 | snapshot of an x-ray image taken in space. And you see a little hole in the middle that shows you that
00:04:33.740 | there's very, very dense fissile material on board the target of the scan. So indeed, in this simulation,
00:04:39.340 | there was a simulated nuclear warhead on board the satellite target. The outcomes for this are pretty
00:04:45.660 | significant and interesting. There are a lot of people in the national security intelligence community in this
00:04:50.540 | country that are interested in developing this kind of capability to inspect adversary spacecraft on orbit,
00:04:55.580 | to understand their capabilities, particularly whether they might carry a weapon of mass destruction.
00:05:01.100 | So having done this research, I actually am going to be able to brief it in Washington, D.C. to some
00:05:06.700 | policymakers at the Pentagon and State. I'm really thrilled about that opportunity. And certainly,
00:05:12.540 | the desktop application with this level of fidelity would not have been possible without
00:05:16.540 | modern AI tools to make this kind of research accessible to an undergrad in less than a year.
00:05:22.060 | My takeaway for you, as a student doing research in the era of AI, is primarily this:
00:05:32.060 | there is no learning curve that is too steep any longer, right? Even the toughest problems, space
00:05:38.220 | technology is notoriously hard, nuclear weapons, existential threats. We can address these critical crises
00:05:45.180 | with the tools that we have today with emerging technology. And so I want to challenge all of the
00:05:50.140 | minds here and other students to think about what are the world's toughest problems? What are the problems
00:05:54.460 | that you thought were unaddressable, that feel like existential crises to you for the next generation?
00:06:01.980 | Those are the ones that we should be using our brand new, shiny, exciting AI assistants to work on,
00:06:08.860 | because that's how we're going to help make the world safer and more secure, or at least outer space,
00:06:13.580 | more secure. So thank you.
00:06:15.660 | I'm going to pass it off to the next presenter now, but if you have any questions, I'd love to talk after
00:06:25.820 | the presentation.
00:06:30.380 | Okay, so it's kind of tough to follow up finding nuclear objects in space. So I'm going to tell you about
00:06:37.660 | how I did not know the difference between the terminal and the code editor, and why Claude is
00:06:42.060 | the reason why I was able to learn how to code. I'm a student at UC Berkeley, my name is Mason Ardidi, and I'll go ahead
00:06:47.260 | and get started. So I want to talk about what we think of as the traditional way to approach learning
00:06:53.180 | how to code. I'm going to call this the bottom-up way, where we start by taking basic classes, learn our basic
00:06:58.780 | skills, and then build apps with those skills. Slowly but surely, we level up our skill set and build apps that are
00:07:04.220 | more complicated. I learned a little bit differently. I'm going to call this the top-down approach, where I had an
00:07:13.340 | idea, as I get inspired randomly, and no idea how to solve it. It was software I'd never coded before.
00:07:19.420 | So I tried to have AI make it for me. Hey, make this app for me. And then when it inevitably fails, I learn
00:07:25.260 | how to do it myself, slowly but surely learning through different layers of abstraction until I actually
00:07:30.780 | understand what's going on. Now, where did this leave me seven months ago? It left me not knowing what
00:07:36.540 | the difference between the terminal and the code editor was. I put npx create-next-app@latest in my page
00:07:43.180 | file. I had no idea what I was doing. But slowly but surely, I asked, why is this happening? What am I
00:07:48.380 | doing wrong? And I was able to learn more complicated skills. Let me show you a demo of something I'm
00:07:54.540 | capable of doing now. Okay, welcome to CalGPT, which is a better way to schedule your Cal courses using AI.
00:08:00.940 | We can ask it a question like, show me math classes with a high average grade, since I want to be lazy in my
00:08:07.580 | math and get an easy curve. Here, it's going to show us five different classes that have an average of A or more.
00:08:13.020 | And in fact, it even showed us classes with a grade point average of 4.0. Can't really get much better
00:08:17.660 | than that. Now, let's say it's getting late in the enrollment cycle. And I want to see classes that
00:08:22.140 | still have open seats. Show me history classes that still have open seats. And this is drawing directly
00:08:30.220 | from Berkeleytime. So it's live data. And here it is, it's showing you history classes, five seats,
00:08:35.260 | 20 seats, or three seats. We can even ask questions that are more deep, like, what is the meaning of life?
00:08:42.060 | And do with that answer as you will. But this is CalGPT. My name is Mason. And enjoy your day.
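CalGPT's code isn't shown in the talk, but a stripped-down version of the pattern in the demo is a Claude tool-use loop over a course-data lookup. In the sketch below, search_courses and its filters are hypothetical stand-ins for the live Berkeleytime data the app pulls from; the tool-use call itself follows the standard Anthropic Messages API shape.

```python
# Minimal sketch of the "ask a question, Claude queries course data" pattern.
# search_courses() is a placeholder for whatever CalGPT actually does with
# live Berkeleytime data.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

COURSE_TOOL = {
    "name": "search_courses",
    "description": "Search Berkeley courses by department, minimum average GPA, and open seats.",
    "input_schema": {
        "type": "object",
        "properties": {
            "department": {"type": "string"},
            "min_avg_gpa": {"type": "number"},
            "open_seats_only": {"type": "boolean"},
        },
    },
}

def search_courses(department=None, min_avg_gpa=None, open_seats_only=False):
    # Placeholder: the real app would hit a live course-data source here.
    return [{"course": "MATH 10A", "avg_gpa": 4.0, "open_seats": 12}]

def ask(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        tools=[COURSE_TOOL],
        messages=messages,
    )
    # If Claude decided to call the tool, run it and send the result back.
    while response.stop_reason == "tool_use":
        tool_use = next(b for b in response.content if b.type == "tool_use")
        result = search_courses(**tool_use.input)
        messages.append({"role": "assistant", "content": response.content})
        messages.append({
            "role": "user",
            "content": [{"type": "tool_result", "tool_use_id": tool_use.id,
                         "content": str(result)}],
        })
        response = client.messages.create(
            model="claude-sonnet-4-20250514",
            max_tokens=1024,
            tools=[COURSE_TOOL],
            messages=messages,
        )
    return "".join(b.text for b in response.content if b.type == "text")

print(ask("Show me math classes with a high average grade"))
```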
00:08:50.460 | I'll show you another one, which I developed at the Pear x Anthropic hackathon as well.
00:08:55.900 | Okay, welcome to GitReady, which is a new way to visualize and understand new code bases.
00:09:03.900 | Let's take a look at Anthropic's SDK for TypeScript, for example.
00:09:08.220 | You'll see soon. And we'll be able to interact with the chart and see
00:09:13.500 | how all of these files interact with each other. So here we have a mapping of some of the most
00:09:19.660 | important files. We chose not to display all of them, just the most important ones that the users
00:09:24.220 | will interact with the most. And we have these lines to show how they're interconnected. And we do this
00:09:29.580 | through the function calls that are actually, like, in each file. So, like, if this demo TypeScript file
00:09:34.300 | is referencing the batch results, that's where the line comes in. And then over here, we have just a
00:09:39.980 | quick description on what the file actually does. And we have our comments on the code base.
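None of the project's code is shown in the talk; the sketch below is a much-simplified stand-in for the idea it describes, linking files in a cloned TypeScript repo by their relative imports rather than by the function-call analysis the real tool performs. The repo path is a placeholder.

```python
# Toy sketch of the "map how files reference each other" idea behind the demo.
# The real project inspects function calls; this simplified version just links
# TypeScript files by the relative modules they import.
import re
from pathlib import Path

IMPORT_RE = re.compile(r"""import\s+.*?from\s+['"](\.[^'"]+)['"]""")

def build_edges(repo_root: str) -> list[tuple[str, str]]:
    root = Path(repo_root)
    edges = []
    for ts_file in root.rglob("*.ts"):
        text = ts_file.read_text(encoding="utf-8", errors="ignore")
        for target in IMPORT_RE.findall(text):
            # Record an edge from the importing file to the imported module path.
            edges.append((str(ts_file.relative_to(root)), target))
    return edges

if __name__ == "__main__":
    # Placeholder path: point this at a local clone of the repo you want to map.
    for src, dst in build_edges("anthropic-sdk-typescript"):
        print(f"{src} -> {dst}")
```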
00:09:45.340 | Okay, and on top of these two, I built many projects over the course of my learning how to code.
00:09:52.860 | Now, what is the point of me showing you all of this? I'm not here to brag. I'm here to say that
00:09:57.900 | Claude is the reason why I was able to learn how to code. Without Claude, without these AI tools,
00:10:03.500 | including Cursor, Windsurf, whatever you guys want to use, none of this would have been possible. And the key
00:10:07.980 | takeaway for me is that you can build anything you want nowadays. You just have to ask the right
00:10:13.100 | questions, learn through the different layers of abstraction.
00:10:15.900 | I think this is representative of a new style of building and a new class of builders.
00:10:22.300 | Where my flow, personally, is I find a problem that I'm inspired by and want to fix.
00:10:27.660 | I realize the solution is something that I have no idea how to do.
00:10:31.900 | And then I have a high level chat with Claude, execute steps in the actual editor,
00:10:36.140 | and then record and post a demo when it's not perfect, hopefully bringing users and revenue later
00:10:41.660 | on. But this iteration cycle, instead of taking years for an undergraduate degree or doing other things,
00:10:47.420 | can be one day to one week maximum if you really want to.
00:10:51.500 | So I'll keep it short and sweet and leave you guys with a couple of things to think about,
00:10:55.420 | which are on my mind right now. Which is, how can we build to get users and revenue?
00:11:00.300 | Not for technical perfection and impressiveness. How can I build things as fast and as simply as
00:11:06.940 | possible? As demonstrated by this prompt, give it to me in the simplest and most concise way possible.
00:11:12.220 | What ideas actually inspire you? And how can we build it today? And lastly, not on the slide,
00:11:20.220 | but what does it mean to really know how to code? Does it mean understanding every single line and
00:11:25.820 | every single function? Or does it mean being able to build something that actually improves people's
00:11:29.980 | lives? I'm going to continue to post more information. If you want to connect with me,
00:11:34.620 | you can scan this QR code. But my name is Mason. Thank you guys.
00:11:38.620 | All right. What is up everyone? How are we all doing? We good? Yeah. My name is Rohil. I'm a freshman,
00:11:54.860 | or just finished freshman year at UC Berkeley in the MET program, studying EECS and business. So CS and
00:12:00.140 | business. And I'm here to talk to you guys today about SideQuest, which is a project that a couple of
00:12:05.980 | friends and I made at the Pear x Anthropic Hackathon recently. So let me tell you guys about a big
00:12:13.500 | problem today, which is AI embodiment. So we see, in Hacker News and the latest news all around, that
00:12:21.420 | we're trying to create bots that interact with our world. And most recently, we have seen these robot
00:12:26.540 | dogs that are able to deliver you a water cup or something like that. But these systems do not compete
00:12:34.460 | with humans ourselves. Humans are like built to interact with our world. And that brings me
00:12:41.340 | to here, which is that today we have humans hiring AI agents to do their work for them. I'm sure all of
00:12:50.860 | you guys have probably employed some sort of AI agent to do your work for you. But today with SideQuest,
00:12:57.740 | we are flipping the script, we are flipping the script. And we have AI agents hiring humans to do their work for
00:13:04.620 | them. So AI agents obviously are amazing at interacting with the digital world. And humans are amazing at
00:13:13.500 | interacting with the physical world. So why can't these AI agents just hire the humans?
00:13:19.180 | So that brings me to the architecture of SideQuest, which is basically, like, let me give you a
00:13:26.620 | hypothetical example. Let's say an AI agent is trying to host a hackathon. So now they have all the logistics
00:13:34.060 | covered, but they need to put some advertising material up. They need some flyers up so that
00:13:39.660 | people can find out where this hackathon is, where to go. But they don't have any physical means to do that.
00:13:44.620 | So what they do is that they ping the nearest human to that area and tell them, "Oh, pick up this flyer,
00:13:51.260 | put it in this location, and livestream that video to me. And as soon as I can see that you did it,
00:13:57.580 | then I'll give you money." So that's exactly what's happening in SideQuest, and I'll show you a short demo.
00:14:09.580 | Hello, world. My AI friends and I are hosting a hackathon. Let's check if the flyers are up.
00:14:14.540 | So we see a flyer here. Flyer detected.
00:14:17.500 | But we do not see a flyer here. No flyer detected.
00:14:21.500 | Bruh, I need a human to put up some flyers in room two. Let's do this.
00:14:35.740 | It looks like there's a quest. So I have to collect three posters from table eight. Let's do it.
00:14:41.900 | So over here, there's a live video stream that Claude is actively looking at and verifying whether
00:14:48.380 | you're doing a new task. I found table eight. Let's see the posters.
00:14:57.980 | Boom. Scanned. It says I have to set them up in Strong Yes.
00:15:01.100 | We're here at Strong Yes. Now, let's set up the poster. And perfect. I think that should be good.
00:15:13.660 | Let's scan it. Booyah. I made a hundred bucks. Let's go. And boom, we're done. We're ready for the hackathon.
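The demo's "Claude is actively looking at the livestream" step comes down to vision calls over individual frames. Here is a minimal, hypothetical sketch of that verification check using the standard Messages API image format; frame capture, quest matching, and the payment step are all left out.

```python
# Minimal sketch of the verification step: send one frame from the human's
# livestream to Claude and ask whether the flyer is actually up. The file
# path and the prompt wording are illustrative.
import base64
import anthropic

client = anthropic.Anthropic()

def flyer_is_up(frame_path: str) -> bool:
    frame_b64 = base64.standard_b64encode(open(frame_path, "rb").read()).decode()
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=16,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/jpeg", "data": frame_b64}},
                {"type": "text",
                 "text": "Is a hackathon flyer visible on the wall in this frame? Answer YES or NO."},
            ],
        }],
    )
    return "YES" in response.content[0].text.upper()

if flyer_is_up("latest_frame.jpg"):
    print("Flyer detected -- release payment to the human.")
else:
    print("No flyer detected -- keep the quest open.")
```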
00:15:24.700 | Yep. And that's SideQuest. So let me talk a little bit about what I learned building with Claude.
00:15:31.260 | First, Claude is really smart, like any of these AI systems these days. And they can
00:15:37.580 | reason through many messy edge cases. So we as humans, we don't need to prompt every little nitty-gritty
00:15:44.140 | thing. We can start thinking about bigger picture parts of building products. Secondly, we should design
00:15:52.780 | with a back-and-forth workflow with these AI systems. Originally, we were thinking
00:15:58.620 | upfront, oh, how should I build this whole big thing? But that's a really big task. You can break it down.
00:16:04.620 | Ask Claude, oh, like what are the different things that I need to do to work on something? And let's
00:16:09.820 | build this step by step. So with this iterative process, you can build like very robust systems.
00:16:17.580 | So bottom line is that you should trust AI and trust Claude that they aren't things that you
00:16:24.140 | have to micromanage. They can think on their own as well. And now some takeaways for builders to be
00:16:30.860 | like this cool guy, not this grumpy guy: you should think of AI as a system rather than just a
00:16:39.420 | feature builder. This is someone that you can talk to and reason with. And secondly, thinking
00:16:46.540 | bigger picture about us as humans, we should be system designers first, or architects of the
00:16:55.660 | things that we're building. Because in the future, we aren't going to be the ones writing the small code.
00:17:00.860 | We'll be the ones dictating what code to write. So that brings me to the end. Thank you guys so much.
00:17:07.500 | Have a great day. Bye bye.
00:17:09.500 | All right. Good afternoon, everyone. I'm Daniel. I study computer science at USC, and I've also built
00:17:19.100 | projects across Amazon, IBM, and various startups. Yeah, very honored to be here as a student speaker
00:17:25.260 | today. For more context, I help lead some of the entrepreneurship programs at USC. And over the past year,
00:17:31.820 | Claude has been integral to many of our projects, powering innovative solutions across various domains.
00:17:36.780 | When Anthropic announced the hackathon at USC, a lot of the students, including my teammates Vishnu,
00:17:42.540 | Shabayan, and myself, were naturally very eager to join in and explore new directions with Claude.
00:17:48.220 | Today, I'm honored to share our journey and insights with you. So let's first start by looking at the
00:17:53.180 | problem. Current LLMs are great at giving answers, but when decisions really matter, one general response
00:17:59.580 | just isn't enough most of the time. Whether it's business, healthcare, or policy, high-stakes decisions
00:18:06.380 | require diverse input and deep analysis. Today, getting those perspectives means prompting an LLM multiple times,
00:18:14.460 | which could be slow, inconsistent, and very manual. Knowing that Claude excels at complex reasoning
00:18:20.940 | as one of its most impressive capabilities, that's the gap that we aim to solve for our hackathon.
00:18:25.740 | Introducing Claude Cortex, a system designed to emulate a panel of experts,
00:18:31.420 | each analyzing the problem from a different angle. It dynamically creates specialized agents tailored to
00:18:37.100 | your problem context and enables parallel processing for diverse insights. The output here is a more
00:18:43.580 | synthesized and well-rounded recommendation, enhancing output quality for decision making.
00:18:48.060 | It's basically like having your own strategy team for each prompt.
00:18:53.740 | So yeah, let me show you how it works with a really simple example to test out the agents. So let's say I
00:18:59.100 | want to learn how to use LangGraph specifically by researching its documentation. I also want to share
00:19:04.060 | that finding with my teammates. I would type that in as a single prompt and let the master agent interpret
00:19:09.180 | that request and spin up different agents, which in this case will include a browser agent to search and
00:19:14.060 | extract relevant information from LangGraph's documentation, a research agent to summarize the key
00:19:19.500 | concepts in plain language, as well as a notes agent to generate clear explanations, which it then shares
00:19:26.860 | with my teammates automatically. Each agent will work independently, but they can communicate with one
00:19:31.980 | another, creating a multi-agent system that gives more comprehensive insights.
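The talk doesn't show Cortex's internals (the real system is orchestrated with LangGraph behind FastAPI), but the core pattern it describes, a master agent choosing specialist roles, specialists running in parallel, and a synthesis pass at the end, can be sketched directly against the Anthropic SDK. The roles, prompts, and wiring below are illustrative assumptions, not the team's code.

```python
# Rough sketch of the Claude Cortex pattern: a master prompt decides which
# specialist roles are needed, those specialists run in parallel, and a final
# call synthesizes their answers.
import asyncio
import anthropic

client = anthropic.AsyncAnthropic()
MODEL = "claude-sonnet-4-20250514"

async def call(system: str, user: str) -> str:
    response = await client.messages.create(
        model=MODEL, max_tokens=1024, system=system,
        messages=[{"role": "user", "content": user}],
    )
    return response.content[0].text

async def cortex(question: str) -> str:
    # Master agent: decide which expert perspectives this question needs.
    plan = await call(
        "You are the master agent. List 3 expert roles (one per line) "
        "whose perspectives would improve this decision.",
        question,
    )
    roles = [line.strip() for line in plan.splitlines() if line.strip()][:3]

    # Specialist agents run in parallel, one per role.
    analyses = await asyncio.gather(*[
        call(f"You are a {role}. Analyze the question from your angle only.", question)
        for role in roles
    ])

    # Synthesis step: combine the parallel analyses into one recommendation.
    combined = "\n\n".join(f"{r}:\n{a}" for r, a in zip(roles, analyses))
    return await call(
        "Synthesize the expert analyses into a single well-rounded recommendation.",
        f"Question: {question}\n\nExpert analyses:\n{combined}",
    )

print(asyncio.run(cortex("Should our clinic adopt an AI triage tool this year?")))
```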
00:19:35.820 | Now for sectors where data security and compliance are paramount, Claude Cortex offers a secure mode by
00:19:43.740 | integrating with AWS Bedrock. It ensures that all operations meet privacy standards, making it ideal
00:19:49.580 | for sensitive environments. The rest of our architecture is also very straightforward. The front
00:19:54.940 | end was built with Next.js and Tailwind. The back end leverages FastAPI and LangGraph for orchestrating
00:20:00.060 | multi-agent workflows. And Claude, of course, powers our agents' reasoning with the addition of browser use,
00:20:05.740 | which allows agents to fetch real-time web data and enhance their analytical capabilities. Claude Cortex represents a
00:20:13.420 | shift in the way we use language models, moving away from simply generating responses to
00:20:18.700 | structuring parallel reasoning pathways and delivering more comprehensive insights.
00:20:22.700 | It's versatile, making it valuable across various sectors, from corporate strategy to public health safety.
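On the secure mode mentioned a moment ago: the Anthropic Python SDK ships an AnthropicBedrock client, so routing the same Messages calls through AWS Bedrock is roughly a one-line swap. This is a generic sketch, not Cortex's code; the region and the Bedrock model ID are illustrative and depend on what a given AWS account has enabled.

```python
# Sketch of the "secure mode" swap: the Anthropic Python SDK's Bedrock client
# lets the same messages call run through AWS infrastructure.
from anthropic import AnthropicBedrock

client = AnthropicBedrock(aws_region="us-west-2")  # uses standard AWS credentials

response = client.messages.create(
    # Illustrative Bedrock model ID -- check which models your account has enabled.
    model="anthropic.claude-sonnet-4-20250514-v1:0",
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize this case file in three bullet points."}],
)
print(response.content[0].text)
```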
00:20:28.620 | Now, the key takeaways from building Claude Cortex are very intuitive, but the main two points here that I
00:20:35.980 | want to emphasize are that when agent outputs were more focused and well-structured, like JSON format, Claude's synthesis
00:20:43.100 | became more nuanced and high quality. It struggled, however, when upstream agents were more vague and
00:20:50.220 | just dumped text blobs into the stream. And then dynamic task creation allows for flexibility. What that
00:20:57.740 | means is we first started off by creating five predefined agents for every scenario. However, we later
00:21:04.780 | realized that having a master agent to decide what tasks and agents to create allowed for more accurate and relevant information.
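The first takeaway is easy to show concretely: constrain each upstream agent to a fixed JSON shape so the synthesis step gets clean input instead of text blobs. The schema fields below are made up for illustration, and in practice you would validate the parse, since models sometimes wrap JSON in code fences.

```python
# Sketch of the "structured upstream outputs" takeaway: ask each specialist
# agent to answer in a fixed JSON shape so the synthesis step has clean input.
import json
import anthropic

client = anthropic.Anthropic()

SCHEMA_HINT = (
    "Respond with JSON only, using exactly these keys: "
    '{"role": str, "key_findings": [str], "risks": [str], "confidence": "low|medium|high"}'
)

def run_specialist(role: str, question: str) -> dict:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=512,
        system=f"You are a {role}. {SCHEMA_HINT}",
        messages=[{"role": "user", "content": question}],
    )
    # A real pipeline would strip stray code fences and validate before parsing.
    return json.loads(response.content[0].text)

finding = run_specialist("public health economist", "Should the city fund mobile clinics?")
print(finding["key_findings"])
```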
00:21:12.780 | What we're building with Claude Cortex sits within a broader trend. Claude is powering a large number of student-led
00:21:18.060 | products at USC. We've seen tools for lawyers to process case files faster, apps that help people retain
00:21:24.220 | and connect knowledge more effectively, and software that can automate documentation and progress updates.
00:21:29.180 | Claude's ability to read deeply, summarize clearly, and follow structure is what makes all of this possible.
00:21:36.620 | Looking ahead, as a student building with Claude, the most powerful applications I've seen aren't just
00:21:41.900 | asking Claude for answers. They're using it as infrastructure, something that you can wire into
00:21:46.380 | your workflows and something that you can orchestrate like a system. And that's the shift that we see as
00:21:51.020 | well. We imagine agents that can collaborate with one another, tools that can reflect, and context that can
00:21:57.740 | compound. In summary, Claude Cortex isn't another AI tool. It's a leap towards a more intelligent,
00:22:03.900 | secure, and multi-dimensional decision-making process. As we continue to refine and expand its capabilities,
00:22:10.380 | we invite you to explore its potential and join us in shaping the future of AI-driven solutions.
00:22:16.300 | Here's the team behind Claude Cortex. We're all student builders and student
00:22:21.340 | leaders at USC, and we would love to discuss more, so please feel free to reach out to us whenever.
00:22:25.420 | I'm Daniel Gao, and it's been a pleasure sharing our work with you. Thank you for your time and attention today.
00:22:31.020 | Thank you.
00:22:37.580 | Thank you.