
LangSmith 101 for AI Observability | Full Walkthrough


Chapters

0:00 LangChain's LangSmith
0:24 LangSmith Setup
2:37 LangSmith Tracing
4:54 Custom LangSmith Traceables
8:18 LangSmith Conclusion

Transcript

Okay, so now we're going to take a look at AI observability using LangSmith. Now, LangSmith is another piece of the broader LangChain ecosystem. Its focus is on allowing us to see what our LLMs, agents, etc. are actually doing, and it's something that we would definitely recommend using if you are going to be using LangChain or LangGraph.

Now let's take a look at how we would set LangSmith up, which is incredibly simple. So I'm going to open this in Colab, and I'm just going to install the prerequisites here. You'll see these are all the same as before, but we now have the langsmith library here as well.

Now, we are going to be using LangSmith throughout the course, so in all the following chapters we're going to be importing LangSmith and it will be tracking everything we're doing. But you don't need LangSmith to go through the course; it's an optional dependency, though as mentioned, I would recommend it.

So we'll come down to here, and the first thing that we will need is the LangSmith API key. Now, we do need an API key, but it does come with a reasonable free tier. So we can see here they have each of the plans, and this is the one we are on by default.

So it's free for one user, up to 5,000 traces per month. If you're building out an application, I think it's fairly easy to go beyond that, but it really depends on what you're building. So it's a good place to start, and then of course you can upgrade as required.

So we would go to smith.langchain.com, and you can see here that this will log me in automatically. I have all of these tracing projects; these are all from me running the various chapters of the course. If you do use LangSmith throughout the course, your LangSmith dashboard will end up looking something like this.

Now what we need is an API key. So we go over to settings. We have API keys and we're just going to create an API key. Because we're just going through some personal learning right now I would go with personal access token. We can give a name or description if you want.

Okay, and we'll just copy that, then come over to our notebook and enter our API key there. And that is all we actually need to do. That's absolutely everything. I suppose the one thing to be aware of is that you should set your LangSmith project to whatever project you're working within.

So of course, within the course we have individual project names for each chapter, but for your own projects you should make sure this is something that you recognize and is useful to you. So LangSmith actually does a lot without us needing to do anything. So we can actually go through.
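The setup described above amounts to a few environment variables, which could look roughly like this (a sketch based on the LangSmith docs; the project name is just an illustrative placeholder, and older LangChain versions use LANGCHAIN_-prefixed variable names instead):

```python
import os

# Enable tracing and authenticate with LangSmith.
# (Older LangChain versions use LANGCHAIN_TRACING_V2 / LANGCHAIN_API_KEY instead.)
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "lsv2-..."  # paste your personal access token here

# Traces are grouped under this project name in the dashboard,
# so pick something you will recognize later.
os.environ["LANGSMITH_PROJECT"] = "my-first-langsmith-project"
```

Once these are set, any LangChain LLM calls in the same process are traced automatically, with no further code changes.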

Let's just initialize our LLM and start invoking it, and see what LangSmith returns to us. So we'll need our OpenAI API key; enter it here, and then let's just invoke "Hello". Okay, so nothing has changed on this end; from the code side there's nothing different here. However, now if we go to LangSmith, I'm going to go back to my dashboard.

Okay, and you can see that the order of these projects just changed a little bit, and that's because the most recently used project, i.e. this one at the top (the LangChain course LangSmith/OpenAI project, which is the current chapter we're in), was just triggered. So I can go into here and I can see, oh, look at this.

So we actually have something in the LangSmith UI, and all we did was enter our LangSmith API key and set some environment variables. That's it. So we can actually click through to this, and it will give us more information. So you can see what was the input, what was the output, and some other metadata here.

You see, it's not that much in here; however, when we do the same for agents, we'll get a lot more information. So I can even show you a quick example from the future chapters. If we come through to the agents intro here, for example, and we just take a look at one of these.

Okay, so we have this input and output, but then on the left here we get all this information. And the reason we get all this information is because agents perform multiple LLM calls, etc., so there's a lot more going on. So we can see, okay, what was the first LLM call, and then we get these tool use traces.

We get another LLM call and the tool use and another LLM call. So you can see all this information which is incredibly useful and incredibly easy to do because all I did when setting this up in that agent chapter was simply set the API key and the environment variables as we have done just now.

So you get a lot out of very little effort with LangSmith, which is great. So let's return to our LangSmith project here and let's invoke some more. Now, I've already shown you that we're going to see a lot of things just by default, but we can also add other things that LangSmith wouldn't typically trace.

So to do that, we will just import the traceable decorator from LangSmith, and then make these random functions traceable within LangSmith. Okay, so we'll run those. We have three here: one generates a random number; one introduces a random delay into how long the function takes; and one either returns "no error" or raises an error.
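As a sketch, the three traced functions could look something like this. The function bodies are my reconstruction from the description above, and the no-op fallback decorator is mine, just to keep the snippet runnable when langsmith isn't installed:

```python
import random
import time

try:
    from langsmith import traceable  # pip install langsmith
except ImportError:
    # Fallback so the sketch runs without langsmith installed:
    # a no-op decorator that works both bare and with keyword arguments.
    def traceable(func=None, **kwargs):
        if func is None:
            return lambda f: f
        return func

@traceable
def generate_random_number() -> int:
    # Simplest possible traced function: returns a random int.
    return random.randint(0, 100)

@traceable
def generate_string_delay(input_str: str) -> str:
    # Sleep a random 1-5 seconds so the latencies vary in the UI.
    delay = random.randint(1, 5)
    time.sleep(delay)
    return f"{input_str} (delayed {delay}s)"

@traceable
def random_error() -> str:
    # Raise roughly half the time so LangSmith records some failed runs.
    if random.random() < 0.5:
        raise ValueError("Random error")
    return "No error"
```

Invoking each of these in a loop (catching the deliberate ValueError) is enough for the runs, including the failures, to appear in the project.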

So we're going to see how LangSmith handles these different scenarios. So let's just iterate through and run each one of those 10 times. Okay, so they're running. Let's go over to our LangSmith UI and see what is happening over here.

So we can see that everything is updating; we're getting that information through, and if we go into a couple of these we can see a little more information: the input and the output, and that this one took three seconds. We can also see the random error function here; in this scenario it passed without any issues.

Let me just refresh the page quickly. Okay so now we have the rest of that information and we can see that occasionally if there is an error from our random error function it is signified with this and we can see the traceback as well that was returned there which is useful.

Okay, so if an error has been raised, we can see what that error is. We can see the various latencies of these functions; you can see that varying throughout here. We see all the inputs to each one of our functions, and then of course the outputs.

So we can see a lot in there, which is pretty good. Now, another thing that we can do is filter. So if we come to here, we can add a filter. Let's filter for errors; that would be ValueError. And then we just get all of the cases where one of our functions has raised an error, a ValueError specifically.

Okay, so that's useful. And there are various other filters that we can add there. So we could add a name, for example, if we wanted to look at the generate string delay function only. We could also do that. Okay, and then we can see the varying latencies of that function as well.

Cool, so we have that. Now, one final thing that we might want to do is make those function names a bit more descriptive or easier to search for, for example. And we can do that by setting the name in the traceable decorator, like so. So let's run that.
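Setting the name looks roughly like this. The function name and body are illustrative placeholders (only the "Chitchat Maker" display name comes from the walkthrough), and the fallback decorator again just keeps the sketch runnable without langsmith installed:

```python
try:
    from langsmith import traceable
except ImportError:
    # No-op fallback when langsmith isn't installed.
    def traceable(func=None, **kwargs):
        if func is None:
            return lambda f: f
        return func

# The `name` argument overrides the function name shown in the LangSmith UI,
# which makes these runs easier to find via the name filter.
@traceable(name="Chitchat Maker")
def error_generation_function(question: str) -> str:
    return f"I'm quite good, thanks. You asked: {question}"
```

In the UI the runs then show up as "Chitchat Maker" rather than the raw Python function name.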

Run this a few times. And then let's jump over to LangSmith again, going to our LangSmith project. Okay, and you can see those coming through as well. So then we could also search for those based on that new name. So what was it? "Chitchat Maker", like so. And then we can see all that information being streamed through to LangSmith.

So that is our introduction to LangSmith. There is really not all that much to go through here; it's very easy to set up, and as we've seen, it gives us a lot of observability into what we are building. We will be using this throughout the course, though we don't rely on it too much.

It's a completely optional dependency, so if you don't want to use LangSmith, you don't need to. But it's there, and I would recommend doing so. So that's it for this chapter. We'll move on to the next one. Bye.