
Vibes won't cut it — Chris Kelly, Augment Code


Transcript

Thanks for coming, by the way, and for sticking around for a little while. If you weren't prepared for this, I hate to break it to you, but this time next year half of us won't even be here anymore. At least, that's what you'd believe if you listen to the hype around AI and AI coding.

You know, there's lots of fanfare, and no disrespect to the very intelligent people who made those quotes. But I think they're probably wrong, not because I don't think AI coding is going somewhere important, but because they haven't actually touched a production system in a very, very long time. And so maybe generating 30% of your code isn't really what they think it is.

Because really, AI code is still code. One thing people don't really recognize in this space is that these companies are working in very, very large code bases where basically every decision that ever needed to be made about that code, that architecture, that infrastructure has already been made for them.

So if I'm generating 30% of my code against, you know, let's say thousands of lines a day against millions of lines of existing code, there's not a lot of wiggle room for what that code can do or should do. Right? Like, it's just another button.

If you've ever, no offense, anyone from Meta here before I start ripping on Meta? No? No Meta. Great, I'll rip on Meta for a minute. If you've ever talked to a Meta engineer, they will talk about how they built a button in the ads platform for six months. Like, that's what they worked on for six months.

That was their job. So there's very little definition of what this code can or needs to do that has wiggle room AI can really influence. Next, AI is still writing code. This is code we have written for 50 years. Same programming languages. Nothing is different there. That code still needs to run in production somewhere.

And if you haven't run a large production system: even if you write a great line of code, in complex systems things fail. Complex systems have emergent behavior that doesn't show up in individual lines of code. And so we still have to run this stuff.

And so who's going to fix it? Who's going to examine it? Who's going to understand those nuances if you don't have software engineers? So I think we're still going to have software engineers. And then also, history repeats itself. This is not the first time I've been told that my career is over.

I'm getting on in years at this point, but for anyone who was around for the DevOps transformation 15 years ago, or the move to cloud, all of the sysadmins I know who were racking boxes and booting kernels got pay raises and now work on much more valuable things.

Now they're much happier doing the work they're doing. So this isn't new. This is just a different level of abstraction, right? Tractors didn't get rid of farms. They got rid of farmhands and horses. Yes, there will be change in the industry for certain, but tractors didn't get rid of farming.

We still have to farm. So, vibes. Anyone here a vibe coder? It's okay if you are, no shame. Anyone? And who here would call themselves a professional software engineer? Great. So vibe coding, if you're not familiar with the term, and I think everyone in this room is, but I'll do a quick summary, is basically letting the AI write all of the code

and think through the code, without really examining the code at all. Just: does it do what I need it to do? And in that case, I just keep going and let it keep going. You don't examine the code and edit it that way. But that's not what we're talking about.

I'm talking about how we can write code that's for production. And when I say production, I mean you have four nines. If you even know what four nines means, that's production code. You have thousands of users, gigabytes of data. We're talking about software that runs the internet today. Vibes don't cut that.
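To put a number on that, here's a rough back-of-the-envelope sketch (not from the talk) of what a four nines budget actually allows, assuming availability is simply uptime divided by total time over a year:

```python
# Rough sketch: the downtime budget implied by "N nines" of availability,
# assuming availability = uptime / total time over one year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for label, availability in [("three nines", 0.999),
                            ("four nines", 0.9999),
                            ("five nines", 0.99999)]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability:.3%}): ~{downtime:.1f} minutes of downtime per year")

# Four nines works out to roughly 52.6 minutes of allowed downtime per year,
# about a minute a week. That is the bar production code has to clear.
```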

Because there are a lot of nuances in what goes into code. So first, let's clear up one misconception. Code is not the job. In the same way that blueprints are not the job of an architect; that's an artifact of being an architect. The artifact of being a software developer is code.

Sure, yeah, I have to output some code. But I make thousands of decisions about what my software is supposed to do. What kind of -- what I'm writing. What packages I'm bringing in. So let's stop conflating generating code with the art and the craft of doing software engineering. Those are different things.

And so, yes, LLMs are great at producing code. But are they -- is that the same as writing production software? So a very smart person that started Stack Overflow, Jeff Atwood, rest in peace Stack Overflow, said the best code is no code at all. And I think that's true because every line of code comes with a burden, right?

I have to maintain that code. I have to debug that code. So every line of code I generate, I have to be responsible for. And so we've been -- we spent so much time thinking about, like, how much code can AI generate? Who cares how much it can generate?

The more it generates, the worse off I end up being, the worse off the system ends up being. We want to put as little code as possible in there. Because all code has trade-offs, right? Like, we recognize this package has this performance characteristic, this methodology has that one. The best example I can give you (I'll try to get through it so you can all get back to snacks) is the difference between a monolith, a microservices architecture, and an event-driven system.

How many decisions go into those kinds of architectures and the choices you have to make to do the same thing? Like, if you've ever -- if you built a flight booking system in all three of these, you have thousands of individual decisions that have to get made. LLMs don't make decisions.

They generate text. They generate patterns. And at some scale, at some point, there's no pattern for my software. Anyone here run a piece of production software that's a bit of a snowflake? That has a bunch of idiosyncrasies in it, where only Bob knows how that part works and only Jane can fix that piece.

She wrote it six years ago and hasn't been on the project since. And that is software. So at some scale, pattern matching doesn't work anymore, because all of the nuances that go into that software can't be pattern matched against. So when the software goes down (this is my trauma from carrying a pager) at 2 in the morning, vibes aren't going to fix the bug.

Like, someone has to diagnose that problem. So what is the work of software engineering? For me, it's changing software safely. That's been my job for 20 years: how can I make changes to software, whether that's adding new functionality or changing existing code, and how do I do it safely, so the software doesn't go down, my users get the thing they need, the widget ships, and, you know, the data is secure.

So how have we done that so far in the industry? We've done lots of different things to solve that problem. One, there's just my own knowledge: I have to learn a shit ton about a code base before I can make changes safely. We use version control to do that.

We have testing. You know, that's why we write tests: to catch, if I change this, did this thing break over there? We use type systems. We use deployment strategies.
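As a minimal sketch of that safety net (a hypothetical function and tests, not code from the talk), this is the kind of thing that catches "did this break over there" no matter who, or what, wrote the change:

```python
# Hypothetical example: a small production function and the pytest tests
# that guard its behavior. If a change (human- or AI-written) alters that
# behavior, these tests are what surface it.
import pytest

def apply_discount(price_cents: int, percent: int) -> int:
    """Discount a price by a percentage, never going below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(0, price_cents - (price_cents * percent) // 100)

def test_basic_discount():
    assert apply_discount(10_000, 25) == 7_500

def test_discount_never_goes_negative():
    assert apply_discount(100, 100) == 0

def test_discount_rejects_bad_input():
    with pytest.raises(ValueError):
        apply_discount(100, 150)
```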

Can AI start to help us? Can the context an AI has, its ability to understand more of the code base, help us? Probably. You know, at Augment, we believe that context is the most important part of all AI code generation. So, yeah, we think we can solve that problem. That doesn't change the fact that I still have to care about production. So let's just assume that we're writing code with AI. Okay, the future is here.

I have to use AI. The thing I find most interesting about this space, and I've been in DevTools for a long time, is that professional software engineers are the last people I see adopting AI. And I've never seen that before. Like, I've seen version control systems change.

I was at GitHub in the early days. I've seen massive jumps, the cloud transformation, pick your innovation. And developers are like, hell yes, give me that new thing. I don't know why. Well, I have some hypotheses, but I don't fully understand why software developers are like, I'm not touching it.

I'm not touching AI. It can't do what it says it can do. So I want to talk about that for just a couple of minutes. A few years ago, AI coding was mostly just a pile of bricks. It kind of worked, but really didn't do much. About a year ago, a year and change, when Sonnet 3.5 came out.

That's really when we saw a massive explosion in AI coding, because the quality substantially jumped. And then four weeks ago, in case you weren't watching the news, literally every AI coding tool said agents are the future. We did. Very proud of that. But now agents are everywhere, right?

And so this transformation has happened very, very quickly. And so I want to talk about how we can do software engineering in this new future that we're seeing. So how do you build software that's easy for AI to write? A couple of tips for you. Have documented standards and practices, right?

Like, what do you use? Every code base I know is in some sort of flux, right? So like, are we using this package or this package? Okay, well, document that. Let the AI know that this is the next direction of your code base. Have reproducible environments, right? Like, can you easily spin up a developer environment?

Is your developer environment very bespoke and unique? You want it to be reproducible. You want easy testing, right? Like, can you run your tests locally? Is it fast? You know, that's pretty standard stuff. Establish clear boundaries for what you're going to do. You can't just hand an AI something like, extract this module using the Strangler pattern.

What does that even mean to an AI? You have to give clear boundaries on what you're trying to build and how you want the AI to get there. And lastly, have clearly defined tasks and work. I wouldn't give any engineer on my team, whether they're senior, staff, or junior, a vague task like, could you make this button do something different?

Like, I see too many engineers prompting that way. And what's interesting about this to me is that it just sounds like software engineering. Right? Ideally, your software engineering stack already has these qualities. And if it doesn't, you're the one saying, our productivity sucks, because we have, you know, bespoke testing infrastructure.

We have bespoke developer environments. You have to give AI the same tools that engineers need. Because it's doing exactly the same job. It's writing code. I've never one-shotted a piece of code in my life personally. Like, I've always made mistakes in the code I write. I run a test.

It fails. I've got a linter. It fixes it. Whatever the thing is. But we've had this expectation that AI can write perfect code, and I don't know why. So when you're thinking about adopting AI as a software engineer, you have to make sure your systems work the way you would expect them to for any other engineer, because that's how the AI is writing code.
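Here's a minimal sketch of what "same tools for the AI as for engineers" can look like in practice: one entry point that runs the linter and the tests, so a human or an agent gets the same failure signal. The tool names (ruff, pytest) are just assumptions; substitute whatever your project actually uses.

```python
# check.py: one command that runs the same checks for humans and agents.
# ruff and pytest are assumed here as examples; swap in your project's tools.
import subprocess
import sys

CHECKS = [
    ("lint", ["ruff", "check", "."]),
    ("tests", ["pytest", "-q"]),
]

def main() -> int:
    for name, cmd in CHECKS:
        print(f"running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{name} failed; fix it and re-run.")
            return result.returncode
    print("all checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```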

The next thing: code review is by far the most important skill. I think we've probably forgotten that skill as an industry. We probably should have been interviewing for code review and not, like, here's an esoteric LeetCode problem for you to solve.

But, like, can you read somebody else's code and comment on why it's good or bad? I think this is going to become far more important as agents are writing more and more code. And our code review tools today, frankly, suck. Like, I'm getting a list of changed files. It's lexicographically sorted.

So, like, great. File A changed. Let me read what changed in file A. What changed in file B? That's not the way to think about how software changes; it's just the order of the files. And so I think we're going to see a pretty big explosion in the way code review can happen.

And that's the skill we need to be interviewing for, and the one you need to be brushing up on. All right, I'm running short on time. So I do want to give you a couple of highlights if you're one of those software engineers who's saying, you know, I don't trust AI.

I need some tips. I want to leave you a few of these. Feel free to take a screenshot. One, the most important thing I can give you is that AI talks like a human, but it's actually a machine. I had this interaction with AI the other day where I was yelling at it, because that's what you do to AI when it doesn't do what you want it to do.

And it's like, oh, I'm sorry. I just scanned that file. I didn't read it. I'm like, the fuck does that even mean? What, how does a piece of software scan a file? It just, like, skimmed it? Like, that's not a thing in software. It's because LLMs are trained on all the data in the world.

Right? There are thousands of emails it has read that said, oh, sorry, I didn't read your document thoroughly, I actually just skimmed it. So it goes, oh, that's what I should say when someone's yelling at me about not reading a file. Right? And so we have to distrust some of the things LLMs say they're doing, because they're not actually doing that.

Don't forget, it's just generating text. It's not always doing exactly what it's outputting in that text. So keep that in mind when you're reading through LLM output. Let's see, what other quick tips can I give you? Sometimes code is just different. It's okay if the LLM outputs code differently than you would.

I can't expect it to produce the code that I would exactly. In the same way, like, the person that sits next to me also writes code a little differently than I do. And so just accept that that's okay. And if you want to force it to write code like you, you can spend that energy, but know the difference.

Is the code better, or is it just different? So let go of some of that. This is why we have linters. This is why we have rule systems. That's why we have style guides: so we can stop arguing about, is that how you define a function?

Or is this how you define a function? Let some of that go if you can. Next, write a rules file. Tell it what you want it to do. Have a file. I always start all of my projects with: here's the stack I'm using, here are the guidelines I want you to follow.

And that file always ends up being part of the context I send with the LLM. And then lastly, I like a define, create, refine loop. Create a document of some kind and have the LLM help you generate it. Like: I'm doing this thing; write a markdown file that lays out the plan.

Save that plan as a markdown file, then use it again in your context. Have the agent run against that file and create the thing. You can make your edits to the plan and then refine it. Then you go in with code completions or whatever and just tweak the things you want to tweak.
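Here's a minimal sketch of that loop in code. The file names rules.md and PLAN.md and the send_to_agent stub are hypothetical; wire it up to whatever agent or API you actually use.

```python
# Sketch of the define / create / refine loop: the rules file and the plan
# file always travel with the prompt. File names and send_to_agent() are
# placeholders, not a real API.
from pathlib import Path

def build_context(task: str) -> str:
    """Assemble the prompt: project rules + current plan + the concrete task."""
    rules = Path("rules.md").read_text()   # stack, guidelines, conventions
    plan = Path("PLAN.md").read_text()     # the plan the LLM helped draft
    return f"{rules}\n\n{plan}\n\nCurrent task:\n{task}"

def send_to_agent(prompt: str) -> str:
    """Placeholder: call your coding agent or LLM API of choice here."""
    raise NotImplementedError

if __name__ == "__main__":
    # Define: the plan lives in PLAN.md and gets refined between iterations.
    # Create: the agent works against the plan plus the rules file.
    # Refine: edit PLAN.md (and the code), then run the loop again.
    prompt = build_context("Implement step 3 of the plan and update its status.")
    print(prompt)  # or: send_to_agent(prompt)
```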

Do that loop over and over again: make a plan, have it create the thing, and then make your tweaks. You'll get a lot more comfortable with, one, how to prompt the LLM to get the code you want.

And two, it's just a very efficient way of coding. If you've let go of the idea that the code has to be how you would write it, rather than just good, functional code, you can get a lot more productive that way. Thanks again. I blasted through that. I'm happy to talk.

Augment is in the expo hall. I'll be around there to chat, or over here. I hope to see you here next year. Hope we're all here next year. I think the jobs aren't going anywhere. But happy coding. Really appreciate it. Bye.