
Prompting 101 | Code w/ Claude


Chapters

0:00 Introduction
1:28 Setting the stage
3:57 Prompt structure
5:40 Task context
8:47 Background detail
13:07 Examples
15:55 Reminders
20:04 Output Formatting

Transcript

Hi everyone, thank you for joining us today for Prompting 101. My name is Hannah, I'm part of the Applied AI team here at Anthropic, and with me is Christian, also part of the Applied AI team. And what we're gonna do today is take you through a little bit of prompting best practices, and we're gonna use a real world scenario and build up a prompt together.

So a little bit about what prompt engineering is. Prompt engineering, you're all probably a little bit familiar with this, this is the way that we communicate with a language model and try to get it to do what we want. So this is the practice of writing clear instructions for the model, giving the model the context that it needs to complete the task, and thinking through how we want to arrange that information in order to get the best result.

So there's a lot of detail here, a lot of different ways you might want to think about building out a prompt, and as always the best way to learn this is just to practice doing it. So today we're gonna go through a hands-on scenario. We're gonna use an example inspired by a real customer that we worked with.

So we've modified what the actual customer asked us to do, but this is a really interesting case of trying to analyze some images and get factual information out of the images and have Claude make a judgment about what content it finds there. And I actually do not speak the language that this content is in, but luckily Christian and Claude both do.

So I'm going to pass it over to Christian to talk about the scenario and the content. So for this example we have here, to set the stage: imagine you're working for a Swedish insurance company, and you deal with car insurance claims on a daily basis.

And the premise is that you have two pieces of information. We're going to go through these in detail as well, but visually, you can see on the left-hand side we have a car accident report form detailing what transpired before the accident actually took place. And then we have a sort of human-drawn sketch of how the accident happened as well.

So these two pieces of information are what we're going to pass to Claude. And to begin with, we could just take them, throw them into the console, and see what happens. So let's transition over to the console and actually do this for real.

And in this case you can see we have our shiny, beautiful Anthropic console. We're using the new Claude Sonnet 4 model, setting temperature to zero, and giving a huge max-token budget, just to make sure there are no limitations on what Claude can do.
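For reference, the equivalent API call might look roughly like this sketch with the Anthropic Python SDK; the model ID, file names, and prompt text are placeholder assumptions, not the exact demo prompt:

```python
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def encode_image(path: str) -> str:
    """Base64-encode an image file for the messages API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=4096,                   # generous budget so nothing gets cut off
    temperature=0,                     # keep the analysis as repeatable as possible
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {"type": "base64", "media_type": "image/png",
                                         "data": encode_image("report_form.png")}},
            {"type": "image", "source": {"type": "base64", "media_type": "image/png",
                                         "data": encode_image("sketch.png")}},
            {"type": "text", "text": "Review this accident report form and sketch, "
                                     "and determine what happened and who was at fault."},
        ],
    }],
)
print(response.content[0].text)
```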

In this case you can see we have a very simple prompt, just setting the stage for what Claude is supposed to do: mentioning that it's meant to review an accident report form and eventually determine what happened in the accident and who's at fault. So let's see what happens with this very simple prompt if I just run it.

Let me go to preview. We can see here that Claude thinks this is related to a skiing accident that happened on a street called Schapangatham, a very common street name in Sweden. And in many ways you can understand this innocent mistake, in the sense that in our prompt we haven't done anything to set the stage for what is actually taking place here.

So this first guess is not too bad, but we still know there's a lot of intuition we can bake into Claude. So let's switch back to the slides. You can see here that prompt engineering is in many ways an iterative, empirical science. In this case, we could almost have a test case where Claude is supposed to understand that it's in a car or vehicular context, nothing to do with skiing, and in that way you iteratively build upon your prompt to make sure it's actually tackling the problem you intend to solve.

And to do so we go through some best practices of how we at Anthropic break this down internally and how we recommend others to do so as well. So we're going to talk about some best practices for developing a great prompt. First we want to talk a little bit about what a great prompt structure looks like.

So you might be familiar with interacting with a chatbot, with Claude, going back and forth in a more conversational style of interaction. When we're working with a task like this, we're probably using the API. And we want to send one single message to Claude and have it nail the task the first time around, without needing to go back and forth.

So the kind of structure that we recommend is setting the task description up front. So telling Claude what are you here to do? What's your role? What task are you trying to do? Then we provide content. So in this case it's the images that Christian was showing, the form and the drawing of the accident and how they occurred.

That's our dynamic content. This might also be something you're retrieving from another system depending on what your use case is. We're going to give some detailed instructions to Claude. So almost like a step-by-step list of how we want Claude to go through the task and how we want it to tackle the reasoning.

We may give some examples to Claude. Here's an example of some piece of content you might receive. Here's how you should respond when given that content. And at the end, we usually recommend repeating anything that's really important for Claude to understand about this task. Kind of reviewing the information with Claude, emphasizing things that are extra critical, and then telling Claude, okay, go ahead and do your work.
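Put together, a user prompt assembled in that order might look like this minimal sketch; all the section text here is illustrative, not the prompt from the demo:

```python
# Assembling one user message in the recommended order: task context first,
# then the dynamic content, step-by-step instructions, optional examples,
# and a closing reminder. All strings here are placeholders.
TASK_CONTEXT = ("You are an AI assistant helping a claims adjuster review "
                "Swedish car accident report forms.")
INSTRUCTIONS = """Steps:
1. Read the report form carefully.
2. Then analyze the sketch.
3. Give a verdict only if you are confident."""
EXAMPLES = "<examples>\n...\n</examples>"  # optional few-shot examples
REMINDER = "Remember: never guess. If the evidence is unclear, say so."

def build_prompt(dynamic_content: str) -> str:
    return "\n\n".join([
        TASK_CONTEXT,
        f"<content>\n{dynamic_content}\n</content>",  # images/docs for this request
        INSTRUCTIONS,
        EXAMPLES,
        REMINDER,
    ])
```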

So here's another view. This has a little bit more detail, a little bit more of a breakdown. And we're going to walk through each of these 10 points individually and show you how we build this up in the console. So the first couple things, Christian's going to talk about the task context and the tone context.

Perfect. So yeah, if we begin with the task context: as you saw when I went through the little demo, we didn't have much elaborating on what scenario Claude was actually working within. And because of that, Claude necessarily has to guess a lot more about what you actually want from it.

So in our case, we want to break that down, give more clear-cut instructions, and make sure Claude understands the task we're asking it to do. Secondly, we also make sure we add a little bit of tone into it all.

The key thing here is that we want Claude to stay factual and confident. If Claude can't understand what it's looking at, we don't want it to guess and mislead us. In our case, we want to be able to understand who's at fault here.

We want to make sure that assessment is as clear and as confident as possible; if not, we're losing track of what we're doing. So if we transition back to the console, we can jump to the V2 that we have here. So I'll just navigate to V2. And I'll also walk through the data this time, because we didn't really do that last time around, just to really highlight what we're looking at.

So what we're seeing here is the car accident report form: 17 rows of checkboxes going through what actually happened. You can see there's a vehicle A and a vehicle B, on the left and right-hand sides. And the main purpose is to make sure Claude can understand this manually generated data and assess what's actually going on.

And that is corroborated by this sketch, if I navigate back here and highlight it. The form is just a different data point for the same scenario. And now we want to bake more information into our version 2.

And in doing so, I'm elaborating a lot more on what's going on. You can see here I'm specifying that this AI system is supposed to help a human claims adjuster who's reviewing car accident report forms in Swedish. You can see we're also explaining that there's a human-drawn sketch of the incident.

And that it should not make an assessment if it's not fully confident. And that's really key. You can see the same settings as well: Claude 4, our shiny new model, temperature zero. If we run this, we can see what actually happens.

In this case, Claude's able to pick up that now it's related to car accidents, not skiing accidents, which is great. You can see it's able to pick up that vehicle A was marked on checkbox one, and then vehicle B was on 12. And if we scroll down though, we can still tell that there's some information missing for Claude to make a fully confident determination of who's at fault here.

And this is great; it's adhering to the task you set: don't make any claims that aren't factual, and only state things when you're confident. But there's still a lot of information missing here about the form and what it actually entails, and that's exactly the kind of information we want to bake into this LLM application as well.

And the best way of doing so is actually adding it to the system prompt, which Hannah will elaborate on. So back in the slides, we have the next item we're going to add to the prompt. And this is background detail, data, documents, and images. And here, as Christian was saying, we actually know a lot about this form.

The form is going to be the same every single time. The form will never change. And so this is a really great type of information to provide to Claude, to tell Claude, here's the structure of the form you'll be looking at. We know that will not ever alter between different queries.

The way the form is filled out will change, but the form itself is not going to change, and so this is a great type of information to put into the system prompt. It's also a great thing to use prompt caching for, since this part of the prompt will always be the same.
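As a hedged sketch, marking that static description for caching with the Anthropic Python SDK might look like this; client and user_messages are as in the earlier sketch, and the description text is abridged:

```python
FORM_DESCRIPTION = """This is a Swedish car accident report form.
It has two columns, one for vehicle A and one for vehicle B, and 17 rows of
checkboxes describing what each vehicle was doing before the accident...
"""  # abridged; the real description covers every row

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID, as before
    max_tokens=4096,
    system=[{
        "type": "text",
        "text": FORM_DESCRIPTION,
        "cache_control": {"type": "ephemeral"},  # cache the prompt up to this block
    }],
    messages=user_messages,  # the per-request images and question, as above
)
```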

And what this will help Claude do is spend less time trying to figure out what the form is each time it sees it. And it's going to do a better job of reading the form because it already knows what to expect there. So another thing I want to touch on here is how we like to organize information in prompts.

So Claude really loves structure, loves organization. That's why we recommend following kind of a standard structure in your prompts. And there's a couple other tools you can use to help Claude understand the information better. I also just want to mention all of this is in our docs with a lot of really great examples.

So definitely take pictures, but if you forget to, don't worry: all of this content is online with lots of examples, and we definitely encourage you to check it out there too. Anyway, one thing you can use is delimiters, like XML tags. Markdown is also pretty useful to Claude, but XML tags are nice because you can actually specify what's inside those tags.

So we can tell Claude, here's user preferences. Now you're going to read some content and these XML tags are letting you know that everything wrapped in those tags is related to the user's preferences. And it helps Claude refer back to that information, maybe at later points in the prompt.
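For instance, a prompt using XML tags as delimiters might look like this minimal sketch; the variable and tag names are illustrative:

```python
# Wrapping each piece of context in named XML tags so Claude knows what it is
# and can refer back to it later in the prompt.
prompt = f"""<user_preferences>
{user_preferences}
</user_preferences>

<accident_report>
{report_text}
</accident_report>

Using the user's preferences above, summarize the accident report."""
```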

So I want to show back in the console how we actually do this in this case. And Christian's going to pull up our version 3. So we're keeping everything about the other part of the user prompt the same. And we've decided in this case to put this information in the system prompt.

You can try this different ways. We're doing it in the system prompt here. And we're going to tell Claude everything it needs to know about this form. So this is a Swedish car accident form. The form will be in Swedish. It'll have this title. It'll have two columns. The columns represent different vehicles.

We'll tell Claude about each of the 17 rows and what they mean. You might have noticed when we ran it before Claude was reading individually each of the lines to understand what they are. We can provide all of that information up front. And we're also going to give Claude a little bit of information about how this form should be filled out.

This is also really useful for Claude. We can tell it things like, you know, humans are filling this form out basically. So it's not going to be perfect. People might put a circle. They might scribble. They might not put an X in the box. There could be many types of markings that you need to look for when you're reading this form.

We can also give Claude a little bit of information about how to interpret this or what the purpose or meaning of this form is. And all of this is context that is hopefully really going to help Claude do a better job analyzing the form. So if we run it, everything else is still the same.

So we've kept the same user prompt down here. Oh, your scroll is backwards from mine. We have the same user prompt here, still asking Claude to do the same task, same context. And we'll see here that it's spending less time. It's kind of narrating to us a little bit less about what the form is because it already knows what that is.

And it's not concerned with bringing all that information back to us. It's going to give us a whole list of what it found to be checked and what the sketch shows. And here, Claude is now becoming much more confident with this additional context we gave it: Claude now feels it's appropriate to say vehicle B was at fault in this case, based on the form and on the sketch.

So already we're seeing some improvement in the way Claude is analyzing these. I think we could probably all agree if we looked at the drawing and at the list that vehicle B is at fault. So we'd like to see that. So we're going to go back to the slides and talk about a couple of other items that we're not really using in this prompt, but can be really helpful to building up your prompt and making it work better.

Exactly. One thing we really want to highlight is examples. Examples, or few-shot prompting, are a really powerful mechanism for steering Claude. And you can use this in quite non-trivial ways as well. Imagine you have scenarios, situations, even, in this case, concrete accidents that have happened, that are tricky for Claude to get right.

But you, with your human intuition and your human-labeled data, are able to get to the right conclusion. Then you can bake that information into the system prompt itself by having clear-cut examples of the data it's supposed to look at. So you can have visual examples.

You can just base64-encode an image and have that as part of the data you're passing along in the examples. And on top of that, you can include a description of how to break that data down and understand it. This is something we really emphasize: the way you push the limits of your LLM application is by baking these examples into the system prompt.
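As a rough sketch of that, reusing the encode_image helper from the earlier example; the file names and assessments are invented for illustration:

```python
# Each example pairs an input image with the assessment a human expert gave,
# wrapped in XML tags so Claude can tell examples apart from the real task.
def example_block(image_path: str, assessment: str) -> list:
    return [
        {"type": "text", "text": "<example>"},
        {"type": "image", "source": {"type": "base64", "media_type": "image/png",
                                     "data": encode_image(image_path)}},
        {"type": "text", "text": f"<assessment>{assessment}</assessment>\n</example>"},
    ]

few_shot_content = (
    example_block("tricky_case_1.png",
                  "Vehicle A at fault: box 8 is checked and the sketch shows A "
                  "rear-ending B.")
    + example_block("tricky_case_2.png",
                    "No confident verdict: the markings are ambiguous and the "
                    "sketch is unintelligible.")
)
```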

And this, again, is the empirical science of prompt engineering: you always want to push the limits of your application, get a feedback loop on where it's going wrong, and add that to the system prompt, so that next time a similar case comes up, Claude has an example set to reference.

You can see here a little example of how we do this, again really emphasizing the XML structure that we like. It gives a lot of structure to Claude; it's what Claude has been fine-tuned on as well. And it works perfectly well for this example.

And in our case, we're not doing this, just because it's a simple demo. But you can realistically imagine that if you were building this for an insurance company, you would have tens, maybe even hundreds, of examples that are quite difficult, maybe in the gray, that you'd want Claude to have some basis in to make the verdict next time.

Another topic we really want to highlight, which we're not doing in this demo, is conversation history. It's in the same vein as examples. We use this to make sure that enough context-rich information is at Claude's disposal when Claude's working on your behalf. In our case now, this isn't really a user-facing LLM application.

It's more something happening in the background. You can imagine that for this insurance company, they have this automated system, some data is generated out of it, and then you might have a human in the loop towards the end. If you were building something much more user-facing, where you'd have a long conversation history that's relevant to bring in, that history is a perfect thing to include in the prompt, because it enriches the context Claude works within.

In our case, we haven't done so. But what we do is, and the next step, is try to make sure we give a concrete reminder of the task at hand. So now we're going to build out the final part of this prompt for Claude, and that's coming back to the reminder of what the immediate task is, and giving Claude a reminder about any important guidelines that we want it to follow.

Some reasons we may do this are, A, preventing hallucinations. We want Claude to not invent details that it isn't finding in this prompt, right? Or isn't finding in the data. If Claude can't tell which box is checked, we don't want Claude to take its best guess, or invent the idea that a box might be checked when it's not.

If the sketch is unintelligible, the person did a really bad job drawing this drawing, and even a human would not be able to figure it out, we want Claude to be able to say that. And so these are some of the things we'll include in this final reminder and kind of wrap-up step for Claude.

Remind it to do things like answer only if it's very confident. We could even ask it to refer back to what it has seen in the form anytime it's making a factual claim. So if it wants to say, "Vehicle B turned right," it should say, "I know this based on the fact that box 2 is clearly checked," or whatever it might be.
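A sketch of what that closing reminder might look like as prompt text; the wording is illustrative:

```python
REMINDER = """Important guidelines:
- Only state a fact if you can point to the evidence for it, for example:
  "Vehicle B turned right (box 2 is clearly checked)."
- If you cannot tell whether a box is marked, say so. Never guess.
- If the sketch is unintelligible, say that no assessment is possible."""
```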

We can kind of give Claude some guidelines about that. So if we go back to the console, we can see... the next version of the prompt. And we're going to keep everything the same here in the system prompt. So we're not changing any of that background context that we gave to Claude about the form, about how it's going to fill everything out.

We're not changing anything else about the context and the role. We're just adding this detailed list of tasks. And this is how we want Claude to go about analyzing this. And a really key thing that we found here as we were building this demo and when we were working on the customer example is that the order in which Claude analyzes this information is very important.

And this is analogous to the way you might think about doing this if you were a human. You would probably not look at the drawing first and try to understand what was going on, right? It's pretty unclear. It's a bunch of boxes and lines. We don't really know what that drawing is supposed to mean without any additional context.

But if we have the form and we can read the form first and understand that we're talking about a car accident and that we're seeing some checkboxes that indicate what vehicles were doing at certain times, then we know a little bit more about how to understand what might be in the drawing.

And so that's the kind of detail that we're going to give Claude here is to say, hey, first, go look at the form. Look at it very carefully. Make sure you can tell what boxes are checked. Make sure you're not missing anything here. Make a list for yourself of what you see in that and then move on to the sketch.

So after you've kind of confidently gotten information out of the form and you can say what's factually true, then you can go on and think about what you can gain from that sketch. Keeping in mind your understanding of the accident so far. So whatever you've learned from the form, you're trying to match that up with the sketch.
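Written out, that ordered task list might read something like this sketch; again, the wording is illustrative:

```python
TASK_STEPS = """Follow these steps in order:
1. Carefully examine the report form. For each of the 17 rows, note whether
   the box for vehicle A and/or vehicle B is marked.
2. List your findings inside <form_analysis> tags.
3. Only then look at the sketch, interpreting it in light of the form.
4. Combine both sources into a final assessment of who was at fault."""
```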

And that's how you're going to arrive at your final assessment of the form. And we'll run it. And here you can see one behavior that this produced for Claude. Because I told it to very carefully examine the form, it's showing me its work as it does that. So it's telling me each individual box, is the box checked?

Is it not checked? And so this is one thing you'll notice as you do prompt engineering. In our previous prompts, we were kind of letting Claude decide how much it wanted to tell us about what it saw on the form. Here, because I've told it carefully examine each and every box, it's very carefully examining each and every box.

And that might not be what we want in the end. So that's something we might change. But it's also going to give me these other things that I asked for in XML tags. So a nice analysis of the form, the accident summary so far, it's going to give me a sketch analysis.

And it's going to continue to say that vehicle B appears to be clearly at fault. This is a pretty simple example; with more complicated drawings and less clarity in the forms, this kind of step-by-step thinking is really impactful in Claude's ability to make a correct assessment.

So I think we'll go back to the slides. And Christian's going to talk about a last kind of piece that we might add to this to really make it useful for a real-world task. Indeed. Thank you so much. So as Hannah mentioned, we sort of set the stage in this prompt to make sure that Claude's really acting on our behalf in the right manner.

And a key step that we also add towards the end of this prompt, which I'm going to show you in a second, is a simple guidelines-or-reminder section, just strengthening and reinforcing exactly what we want to get out of it. And one important piece is output formatting.

You can imagine that if you're a data engineer working on this LLM application, all this fancy preamble is great, but at the end of the day you want your piece of information stored in, say, your SQL database, or wherever you keep that data; the rest of what Claude needs to produce its verdict isn't really necessary for your application.

You want the nitty-gritty information for your application. So if we transition back to the console, you'll see that we've just added a simple important-guidelines section. And again, this is just reinforcing the mechanical behavior we want out of Claude here. We want to make sure the summary is clear, concise, and accurate.

We want to make sure that nothing influences Claude's assessment apart from the data it's analyzing. And then finally, when it comes to output formatting, in my case I'm just going to ask Claude to wrap its final verdict in XML tags. Everything else my application will actually ignore; it will just look at the assessment itself.

I can use this if I want to build some sort of analytics tool afterwards, or if I just want a clear-cut determination. So if I run this, you'll see it going through the same process we've seen before.

In this case, it's much more succinct, because we've asked it to summarize its findings in a much more straightforward manner. And then finally, towards the end, you'll see that it wraps the output in these final-verdict XML tags. So during this demo, we've gone from a skiing accident and unconfident, insecure outputs, to perhaps a car accident in the second version, to now a much more strictly formatted, confident output that we can actually build an LLM application around, and actually help a real-world car insurance company, for example.
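Downstream, the application might pull out just the verdict with a small parser like this sketch, assuming Claude was asked to use the final_verdict tags above:

```python
import re

# Extract the verdict; everything else in the response is ignored.
text = response.content[0].text
match = re.search(r"<final_verdict>(.*?)</final_verdict>", text, re.DOTALL)
verdict = match.group(1).strip() if match else None  # None -> route to a human
print(verdict)  # e.g. write this to your SQL database
```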

Finally, if we transition back to the slides, another key way of shaping Claude's output is actually putting words in Claude's mouth, or as we call it, pre-filled responses. You can imagine that parsing XML tags is nice and all, but maybe you want structured JSON output, to make sure it's JSON-serializable and you can use it in a subsequent call, for example.

This is quite simple to do. You just specify that Claude's output needs to begin with a certain string. This could be, for example, an open curly bracket, or, in the case we see in front of us, an XML tag for an itinerary.

In our case, it could also be a final-verdict XML tag. And this is just a great way of, again, shaping how Claude is supposed to respond, without all the preamble if you don't want it, even though that preamble is also key in shaping the output and making sure Claude reasons through the steps we want.
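A hedged sketch of a pre-filled response with the messages API: the conversation ends with a partial assistant turn, and Claude continues from exactly that point; model ID and prompt text are assumptions:

```python
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Assess the accident and reply as JSON only."},
        {"role": "assistant", "content": "{"},  # prefill: forces JSON, skips preamble
    ],
)
print("{" + response.content[0].text)  # re-attach the prefilled brace
```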

So in our case here, we would just wrap the verdict in the final-verdict tag and parse it afterwards; but you can use pre-fill as well. Now, finally, one thing I'd like to highlight here is that Claude 3.7 and especially Claude 4 are hybrid reasoning models, meaning there's extended thinking at your disposal.

And this is something we want to highlight, because you can use extended thinking as a crutch for your prompt engineering. Basically, you can enable it to make sure Claude actually has time to think. It adds thinking tags, a scratchpad, and the beauty of that is that you can analyze the transcript to understand how Claude is going about the data.

So as we mentioned, we have these checkboxes, and Claude goes step by step through the scenario that transpired in the accident. And in many ways, you can take what you learn there and help Claude by building it into the system prompt itself. It's not only more token-efficient, it's also a good way of understanding how these intelligent models, which don't have our intuition, actually go about the data we provide them.
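A sketch of enabling extended thinking and reading the reasoning back, assuming the messages API's thinking parameter; the budget numbers are arbitrary:

```python
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model ID
    max_tokens=8192,                   # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 4096},  # scratchpad budget
    messages=user_messages,
)
for block in response.content:
    if block.type == "thinking":
        print("REASONING:", block.thinking)  # how Claude worked through the form
    elif block.type == "text":
        print("ANSWER:", block.text)
```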

And because of that, it's quite key in working out how your system prompt can get a lot better. And with that said, I'd like to thank all of you for coming today. We'll be around as well, so if you have any questions on prompting, please come find us.

If you want to learn more about prompting, in an hour we have Prompting for Agents. And right now we have an amazing demo of Claude Plays Pokemon, so don't go anywhere for that. And as Christian said, we'll be around all day. I know we didn't have time for Q&A in this session.

But please come find us if you want to chat. And thank you guys for coming. Thank you so much.