
To the moon! Navigating deep context in legacy code with Augment Agent — Forrest Brazeal, Matt Ball


Transcript

Welcome, everyone. Thank you so much for coming. My name is Forrest. This is Matt. And we're going to be talking to you today about Augment Agent and specifically legacy code: how we get the most out of gnarly legacy code bases using an AI agent. So I do not work for Augment Code.

I am a friend and partner of Augment Code, and I helped put this talk together. Matt is from Augment Code, so he's going to be your best person to come to with your most detailed technical questions after the session. Matt, anything you want to say about yourself? Yeah, I was once a software engineer and then got into the developer tool space.

I was at Postman for a number of years and then got really excited by the AI boom. So I've been at Augment for the last two years or so. I'm a solutions architect. Fantastic. All right, so just a quick roadmap of where we're going. We're going to talk a little bit about Augment Agent.

Uh-oh, do we not have them? I see someone pointing. We good now? Okay, fantastic. So we've got Augment Agent. We're going to talk a little bit about the Apollo 11 guidance computer and then we're going to show you some really interesting things you can do with that code base and Augment Agent.

So Augment is the product we're going to be using today. You do not need your laptop out to follow along with this session. We will do it all in front of your eyes. But if you want, you can sign up. There's a free trial at AugmentCode.com. And Matt will tell you a little bit about the most important features of Augment Code.

Yeah. Number one is our context engine. So we really identified early on that in order to get high-quality outputs from any of the models, they require high-quality input. Doing that for code bases is not necessarily straightforward. It's not just like text. So this is something that we've seen in the market.

A lot of folks assume there's an easy way to do this. We spent two years building a proprietary system that can lift the right knowledge from a code base and pass it over to the models. And we think that results in high-quality outputs and more accurate understanding. We're a plug-in to existing IDEs.

We see in a lot of enterprises that if you've got a 10-year-old Java code base and you're using IntelliJ, you don't want to have to switch. And then from day zero, because we were targeting enterprise, we've focused a lot on security capabilities. So we have things that are unique to us versus any other vendor, like customer-managed encryption keys.

We have ISO 42001, which is a new AI ISO standard. So we really wanted to laser in on providing security over what is a very important asset: your code base. Awesome. And then we've got a couple of tools we'll be referring to today, mainly the chat and agent modes of Augment Code.

And when would you use each of those, Matt? Yeah, chat is great for kind of simple back-and-forth question-and-answer or when you want to leverage a little bit more control over the models and you just want to go one turn at a time. Obviously, we see that agents can integrate into other tools, are much more capable of larger, more complex tasks.

And then remote agents are great when you actually want to close the lid on your machine and allow them to continue to run in the background and in parallel. Yes, we did a workshop on Tuesday that involved a contest component. And I had someone come up to me afterwards saying, I don't understand how the winners were so fast at submitting their solutions because we had eight or nine problems we were doing.

And it's possible some of them were using remote agent to do some of those tasks in parallel. So it can come in handy for that as well. All right. So we're going to be talking today about the Apollo 11 mission in 1969. I think most of us are familiar with this, the first time humans landed on the moon.

There were three astronauts in that capsule: Neil Armstrong, Buzz Aldrin, and Michael Collins. But sometimes people say there was a fourth astronaut on the Apollo 11 mission. Does anyone know what that astronaut was? Yeah, the Apollo guidance computer, or the AGC. Here are the main things to know about the AGC.

So this is used to land the lunar module. If you've ever seen that picture of Margaret Hamilton with like that giant stack of papers with the code on it, that was the source code for the Apollo guidance computer. It was one giant monolithic source file developed over a period of several years at MIT in the 1960s.

It was written in assembly language. Who here has written assembly at some point in their lives? Put your hand down if it was in college. All right. So keep your hand up if it was in production in some capacity. Yeah, exactly. That's what I thought. All right. So not a lot of us are going to be able to look at assembly code and be able to tell you confidently what it does on first glance.

I certainly can't. I haven't written much assembly since I was in computer systems 201. All right. So in some ways, this is the ultimate legacy code base. It is on GitHub. You can check it out yourself if you want. It's easy to find. We are going to try to navigate it well enough to see if we can understand it and land the lunar module ourselves.

And Augment Agent is going to help us do that. So the first thing you want to do anytime you pop open a large legacy code base is you want to see if you can get your head around it, right? Someone put a lot of their time and mental energy into putting this code base together, and you've got to catch up with them.

That historically has been a long process. It's been a fraught process. It's been an imprecise process. And AI agents such as Augment Code can help a lot with that. So let's look at a specific example, and I'm choosing the 1201 and 1202 program alarms. If you're an Apollo 11 history buff, you know about these because during the final descent onto the surface of the moon, the astronauts called to Houston and said, "Hey, we're getting this alarm.

It's a 1202. Give us a reading on the 1202 program alarm." This alarm went off five times as the lunar module, the Eagle, was making its descent onto the lunar surface. And there was a question for a moment about whether the landing would need to be aborted. And finally, Houston said, "No, you're okay.

You can continue." So our question is, could we determine that for ourselves? If you were given the Apollo guidance computer source code, would you be as confident as Houston was to say, "No, this is not a problem. It's not going to affect the success or stability of the mission"? So the way I would do this: I'm going to pop open VS Code.

I've got VS Code up here with a slightly modified version of the Apollo 11 code base. Just to orient you, it looks like this. This is their version of assembly code. It's truly just a bunch of two-word commands saying, "Move this piece of data into this register." So it's not fantastically readable.

And even if you did a grep and looked for 1201 or 1202, it's not going to tell you what those alarms actually mean. So I can go to Augment chat and say, "What does the 1202 program alarm do?" And let's see what it tells me.

It's generating our response. All right, so it's going to take a minute to churn on that. And it's going to come back to me with a number of pieces of information from the code base itself, and it will also be able to call out to the web and look there.

One of the things I think is interesting about using a chat mode like this is it's kind of similar to calling back to mission control in Houston. And if you were to plug in some MCP servers to this, it gives you an even larger mission control element to work with.

Okay. So it's actually reading this file for us. The AGC executive scheduler system runs out of available core sets. All right. And it shows us where that happens. Look, there's octal value 1202 right there. It explains the number of registers there are, along with 1201, the no-VAC-areas alarm.

So it pulled both of these, and look, it went out to the web and it's giving us the context of what actually caused the error: an external radar system being left on. And the Apollo 11 computer was smart enough to offload some low-priority tasks so that it could continue executing mission-critical components inside the computer.

That's why they were able to land. So yeah, and then there's your little TL;DR: 1202 means the computer's trying to do two things at once and has run out of memory to track all the concurrent jobs. How long would it have taken you to figure that out on your own, reading through that code base?
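For readers, the scheduling behavior described above can be sketched as a toy model. The eight-slot pool matches the real AGC Executive's eight core sets, and the alarm code is real; the class, method names, jobs, and priorities below are purely illustrative, not AGC code:

```python
# Toy model of the AGC Executive's fixed pool of job slots, to show why
# alarm 1202 fires. Only the eight-core-set count and the alarm code are
# historical; everything else here is illustrative.

class Executive:
    def __init__(self, core_sets=8):
        self.free = core_sets   # the real Executive had 8 core sets
        self.jobs = []          # (priority, name) pairs

    def schedule(self, name, priority):
        """Claim a core set for a job; alarm 1202 if none are left."""
        if self.free == 0:
            raise RuntimeError("PROGRAM ALARM 1202: no core sets")
        self.free -= 1
        self.jobs.append((priority, name))

    def shed_lowest(self):
        """Free a slot by dropping the lowest-priority job, roughly
        analogous to the AGC restarting and rescheduling only vital jobs."""
        self.jobs.sort(reverse=True)   # highest priority first
        self.jobs.pop()                # drop the lowest-priority job
        self.free += 1

agc = Executive()
for i in range(8):                     # normal workload fills every slot
    agc.schedule(f"job{i}", priority=i)
try:
    agc.schedule("radar_data", priority=0)   # the extra radar load
except RuntimeError as alarm:
    print(alarm)                       # PROGRAM ALARM 1202: no core sets
agc.shed_lowest()                            # shed low-priority work...
agc.schedule("landing_guidance", priority=9) # ...so critical work continues
```

The point of the sketch is the failure mode: a fixed job pool plus unexpected load means overflow, and the recovery is to shed the least important work, not to halt.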

I can tell you it would have taken me more than the 30 seconds we've been talking here. So that's pretty cool to see. All right. The second thing I want to do is a little bit of code modernization. Now, when the lunar module actually landed on the surface, Neil Armstrong used a routine in the lunar landing guidance equations file called P66.

And that was kind of like a manual override mode. They didn't really trust the computer enough to let it land the Eagle all by itself, so Neil Armstrong kept a thumb on the controls throughout that process. There is also a routine in that file, as I discovered, called P65.

P65 is full automatic. That's where you just let the lunar module land by itself, and that's the one we're going to use. It turns out that Augment Agent has a thing called auto mode. Yes? Yes. I'm going to show you what that looks like. So you see down on my chat here, I actually was on agent mode before.

So we could have been in chat mode, but I can also be in agent mode. In agent mode, if I ask it to write some code for me or to run some code, it's not going to ask me for permission. It's not going to wait and have me click on something.

It's just going to go do it. I could go get a cup of coffee and let Augment Agent land on the moon for me. So what I'm going to do: I've added a couple of files to this code base. One is called simulator.py. I wrote a little simulator that expects a file to exist called descent.py.

And it expects a class to exist in that file, called LunarDescentGuidance, that implements the P65 algorithm. That file is not written yet. I'm going to see if Augment Agent can write that file, run the simulator, and land on the moon without any human assistance whatsoever, just like the actual P65 would have run.
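The contract described here might look roughly like this. simulator.py itself wasn't shown on screen, so the method name, initial conditions, physics step, and tolerance below are guesses, not the demo's actual file:

```python
# Guessed sketch of the contract simulator.py enforces. Everything here
# (method signature, initial conditions, tolerance) is an assumption.

class LunarDescentGuidance:
    """Minimal stand-in for the class the simulator expects in descent.py."""
    def p65_guidance(self, altitude, velocity, dt):
        # Placeholder logic: command the acceleration needed to reach a
        # gentle constant descent rate within one time step.
        target_rate = -0.5              # m/s, made-up target
        return (target_rate - velocity) / dt

def run_simulation(guidance, altitude=150.0, velocity=0.0, dt=0.1):
    """Integrate the descent until touchdown; return final vertical speed."""
    moon_gravity = -1.62                # m/s^2
    while altitude > 0:
        accel = guidance.p65_guidance(altitude, velocity, dt)
        velocity += (accel + moon_gravity) * dt
        altitude += velocity * dt
    return velocity

final_v = run_simulation(LunarDescentGuidance())
print(f"final vertical velocity: {final_v:.2f} m/s")  # well under 1 m/s
```

The agent's job in the demo is essentially to fill in a correct guidance class against a loop like this one.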

Should we give it a try? Yes. All right. So this is my favorite prompt to do this. It's just run simulator.py until it succeeds. Nice. Do not change simulator.py. All right. That's where it would try to get me, right? Okay. Let's see what happens. All right. So what's augment agent doing here?

All right. It's going to run the simulator.py file and work through any issues. So, Matt, any comments you want to make about how Augment builds its plan for what it wants to run here? Yeah. So it's going to use code-base knowledge as a starting point to figure out what tools it should run and what commands might be important.

And so from there it can start to understand what might be missing. And then typically you'll see the agent start to put together the framework of a plan as to how to solve this. So it might go and read individual files and just start to gather a better sense of what it's going to take to complete the task.

So here it's realizing at this point that the file is missing, and then it's going to start exploring the code base more generally with our context engine, so it can kind of grok some of that assembly code and start to think about what an implementation would look like. That's right, and it's actually running little terminal commands for us, right?

And it's not waiting for us to intervene here. It's just doing it. All right. So it's mapped things out. Yes. All right. So there's a little test file in here. Oh, it already succeeded. Okay. It got ahead of me. So based on the test files, I can understand what the lunar descent guidance class should look like.

So it created the missing descent.py file. Let's take a look at that and see. So there you go. So it wrote descent.py. It's an implementation of the P65 vertical descent guidance. So Augment has indexed this entire code base. It's able to go look at it. So it found, you know, where it is in the original source code.

And then I got to tell you, it's easier for me to read Python than it is to read assembly code, right? But it's actually laying this out for us. So here's where the position and velocity are. It's got the desired velocity. It's got the gravity fields. And then it pulls the routine together.

And it should actually have a routine in here called P65 guidance that implements this in Python. And so here's our final position as it ran. Let me catch up with my chat. Yeah, and it ran the simulator: final position, this number of meters, final vertical and horizontal velocity.

And you can see that it landed within the zero-to-one-meter tolerance we would expect for our final simulation values. So, yay, Augment Agent. Good job. It landed on the moon for us. Okay. Most of us are not landing on the moon as part of our day job.
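A guidance law of the kind described above can be sketched like this. The generated descent.py wasn't shown line by line, so the constants, gains, and exact form here are assumptions; only the overall idea, driving velocity toward a desired descent rate while compensating for lunar gravity, comes from the talk:

```python
# Sketch of a P65-style vertical guidance law. Constants and form are
# assumptions, not the demo's actual generated code.

MOON_G = 1.62        # lunar gravity, m/s^2
DESIRED_RATE = -1.0  # desired descent rate near touchdown, m/s (assumed)
TAU = 2.0            # response time constant, s (assumed)

def p65_guidance(velocity_z):
    """Commanded vertical thrust acceleration, positive up.

    Corrects the velocity error over roughly TAU seconds, plus gravity
    compensation so the lander never free-falls.
    """
    velocity_error = DESIRED_RATE - velocity_z
    return velocity_error / TAU + MOON_G

print(p65_guidance(-3.0))   # falling too fast: extra braking thrust, ~2.62
print(p65_guidance(-1.0))   # on speed: pure hover compensation, 1.62
```

The appeal of the Python version over the assembly is exactly this: the velocity-error term and the gravity term are visible as separate, named pieces.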

So is this a parlor trick, or is there something we can actually learn here? This is what I would take back to a legacy code base of mine based on what we've seen today. There are three basic steps. Number one: anytime you pull open a legacy code base, use the agent to help you understand what's going on in it.

So Augment Agent will index it for you. You can ask questions in chat. You're going to be asking questions like, what does this code base actually do, right? What is this piece of the code? What does file XYZ do for me? Are there style or convention things going on here that I should be aware of?

And then you can start writing some tests. Sometimes people ask, if I'm not sure I fully understand this code, am I going to know if the new code that I create is actually correct? I think a blend of agent and chat modes is going to help you out a lot here.

Use chat mode to predict what you think the code should do. So in the case of the P65 algorithm: tell me what should happen when I run this lunar descent given these input parameters. And then have agent mode write some tests to check that behavior.

It's not always going to get it right. I had a case where it was working with the scheduler and it couldn't decide for itself whether priority level zero was the highest or lowest level in the system. You could convince it either way; it's a classic thing an agent would struggle with.

So take the time to go read it yourself. Have the agent surface those pieces of the code for you, and then, you know, you're an engineer; apply your human ingenuity to it. And then modernize: start converting small portions of the code, in modular fashion, to the desired language or style that you want.

And apply those same tests to make sure that you're not getting far afield from what the actual functionality of the code should be. And that is a strategy for success not just for landing the Apollo guidance computer on the moon, but for working with your code as well. All right.

Once again, you can try Augment Agent yourself for free at augmentcode.com. Bring your gnarliest legacy code bases and refactoring projects. I forget how many lines of code are in the Apollo guidance computer repo. It's thousands. I mean, that's reasonable for this, right? Yeah, that's pretty small fry when it comes to, like, enterprise projects these days.

So, you know, the real-world equivalent of this kind of thing might be: I've got a Java 8 project and I want to go to Java 17. That's really tedious, toilsome work. How do I hand some of that over to AI? Yep, that's the real-world equivalent. That's right.

And as you can see, it's pretty fast, too. And you can bring your greenfield projects; there's no shame in that. But the legacy ones, I think, are a little bit more fun. That's all we've got. Yes, sir? What about things like checking for dependencies across a large code base?

It loves to do that. Yeah, and it can actually go back in. Yes, it can. And you actually saw a little bit of that here, in about 20 seconds, right? Where it went through and there were some missing files; it tried to run Python and said, oops, it's not installed.

I'm going to use Python 3 instead. It's happy to do that, in 12 seconds. Go ahead, Drew. What you showed was effectively test-driven development. Yes. Could this move the needle on whether that's a recommended practice? Yeah. Agents love to reason and iterate, so any way they can check their own result is very helpful.

So tests are good for that. That's a great question. We are out of time. Thanks, everyone. We appreciate you.