Conversational Memory in LangChain for 2025


Chapters

0:00 Conversational Memory in LangChain
1:12 LangChain Chat Memory Types
4:26 LangChain ConversationBufferMemory
8:23 Buffer Memory with LCEL
13:14 LangChain ConversationBufferWindowMemory
16:01 Buffer Window Memory with LCEL
22:32 LangChain ConversationSummaryMemory
25:17 Summary Memory with LCEL
30:12 Token Usage in LangSmith
32:08 Conversation Summary Buffer Memory
34:36 Summary Buffer with LCEL

Transcript

In this chapter we're going to be taking a look at conversational memory in LangChain. We'll look at the core chat memory components that have been in LangChain since the start but are essentially no longer in the library, and we'll see how we actually implement those historic conversational memory utilities in the new versions of LangChain, meaning 0.3. As a pre-warning, this chapter is fairly long, but that is because conversational memory is such a critical part of chatbots and agents. Conversational memory is what allows them to remember previous interactions; without it, our chatbots and agents would just respond to the most recent message without any understanding of previous interactions within a conversation, so they would simply not be conversational. And depending on the type of conversation, we might want to take various approaches to how we remember those interactions.

Throughout this chapter we're going to focus on four memory types. I'll show you how each one works, but what we're really focusing on is rewriting them for the latest version of LangChain using what's called RunnableWithMessageHistory. So we'll look at the original implementation of each of the four memory types, and then we'll rewrite each one with the RunnableWithMessageHistory class.

Taking a look at each of the four very quickly: ConversationBufferMemory is, I think, the simplest and most intuitive of these memory types. Your messages come in, they are stored in this object as essentially a list, and when you need them again it returns them to you. There's nothing else to it; it's super simple. ConversationBufferWindowMemory, with the new word "window" in the middle, works in pretty much the same way, but it won't return all of the stored messages; instead it returns only the most recent ones, say the most recent three, and that is defined by a parameter k. ConversationSummaryMemory, rather than keeping track of the entire interaction history directly, takes the interactions as they come in and compresses them into a small summary of what has happened in the conversation; it keeps iterating on that summary with every new interaction, and the summary is what gets returned to us when we need it. Finally, we have ConversationSummaryBufferMemory. The buffer part refers to something very similar to the buffer window memory, but rather than keeping the most recent k messages, it looks at the number of tokens in your memory and returns the most recent messages up to a token limit; that's the buffer part. It then merges that with the summary memory, so essentially you get a list of the most recent messages, based on token length rather than number of interactions, plus a summary, which comes at the top. So you get both. The idea is that the summary maintains all of your interactions in a very compressed form, so you lose less information: the very first interaction, where the user might have introduced themselves and given you their name, would hopefully be maintained within the summary rather than lost, and you then have almost a higher-resolution view of the most recent tokens from your memory.
Okay, so let's jump over to the code. We're going into the 04 chat memory notebook; open that in Colab. Here we are, so let's go ahead and install the prerequisites. We again may or may not use LangSmith; that is up to you. Enter your API key if you're using it, and let's come down and start. First we just initialize our LLM, using GPT-4o-mini in this example, again with a low temperature, and we're going to start with ConversationBufferMemory.

This is the original version of this memory type: we initialize ConversationBufferMemory with return_messages set to True. The reason we set return_messages=True, as it mentions up here, is that if you don't, your chat history gets returned to the LLM as one long string, whereas chat LLMs nowadays expect message objects. So you want to return these as messages rather than as strings; otherwise you'll get some strange behavior out of your LLM. I think by default it might not be True, so make sure it is set. Note that this is deprecated; it does give you a deprecation warning here, as it comes from older LangChain, but it's a good place to start just to understand things, and then we'll rewrite it with runnables, which is the recommended way of doing things nowadays.

So, adding messages to our memory: it's just a conversation (user, AI, user, AI, and so on), a fairly random chat. The main thing to note is that I provide my name, and we have the model's name, right towards the start of those interactions. I'm going to add all of those, and then we can load our history and see what we have: a human message, AI message, human message, and so on; exactly what I just showed you, in LangChain's message format. Alternatively, we can initialize the ConversationBufferMemory as we did before and add each message directly into memory using add_user_message, add_ai_message, and so on; load again and it gives us the exact same thing. There are multiple ways to do the same thing.

Cool, so we have that. To pass all of this into our LLM (again, this is all deprecated stuff, and we'll learn how to do it properly in a moment), we'd use the ConversationChain. Nowadays we'd use LCEL for this, but I just want to show you how it all goes together. Then we invoke with "what is my name again?" and see what we get. Remember, this ConversationBufferMemory doesn't drop messages; it remembers everything. And honestly, with the large context windows of many LLMs, that might be exactly what you do; it depends on how long you expect a conversation to go on for, but in most cases you could probably get away with it. So, "what is my name again?" It says "your name is James". Great, that works.
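As a rough sketch, the deprecated pattern described above looks something like this; the exact messages here are illustrative, not the notebook's:

```python
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)

# return_messages=True makes the memory return message objects,
# not one long string, which is what chat LLMs expect
memory = ConversationBufferMemory(return_messages=True)
memory.save_context(
    {"input": "Hi, my name is James"},
    {"output": "Hey James, I'm Zeta! How can I help?"},
)

# equivalent, message-by-message alternative
memory.chat_memory.add_user_message("I'm researching conversational memory")
memory.chat_memory.add_ai_message("Interesting! What would you like to know?")

print(memory.load_memory_variables({}))  # {'history': [HumanMessage(...), ...]}

# the deprecated chain that wires the memory into each LLM call
chain = ConversationChain(llm=llm, memory=memory)
chain.invoke({"input": "What is my name again?"})
```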
Now, as I mentioned, everything I just showed you is deprecated; that's the old way of doing things. Let's see how we actually do this in modern, up-to-date LangChain. We're going to use RunnableWithMessageHistory to implement it. We'll need LCEL, and for that we need to define our prompt template and our LLM as we usually would. We set up our system prompt, which is just "you are a helpful assistant called Zeta", and we put in this MessagesPlaceholder. That's important: it is essentially where our messages are coming from. Our chat history is going to be inserted there, after our system prompt but before our most recent query, which is inserted last. So remember that MessagesPlaceholder item; we use it throughout the course as well, both for chat history and, as we'll see later on, for the intermediate thoughts that an agent goes through. We link our prompt template to our LLM; if we wanted, we could also pass the history through here explicitly alongside the query, but I'm not going to do that right now.

So we have our pipeline, and we can go ahead and define our RunnableWithMessageHistory. This class requires a few items when we initialize it. We have our pipeline with history: the history_messages_key has to align with what we provided as the MessagesPlaceholder variable_name in our pipeline's prompt template, which is "history"; that's where it's coming from, and it's important that the two link up. Then for the input_messages_key we have "query", which again links to the {query} in the prompt. Both of those matter. The other thing we pass, obviously, is the pipeline from before, and then we also have get_session_history. What this does is say: I need to get the list of messages that make up my chat history, the ones that will be inserted into that history variable. It's a function we define, and within it what we're trying to do is replicate what we had with the previous ConversationBufferMemory. It's very simple: we have InMemoryChatMessageHistory, which is just the object we'll be returning. The function takes a session_id, essentially a unique identifier, so that each interaction within a single conversation is mapped to that specific conversation; you don't want overlap when, say, multiple users are using the same system, so you want a unique session ID for each of them. If the session_id is not in chat_map, an empty dictionary we defined, we initialize that session with an InMemoryChatMessageHistory, and then we return it. All that happens is that our messages get appended within chat_map[session_id] and returned; there's really nothing else to it, to be honest.
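Here's a minimal sketch of that setup, assuming GPT-4o-mini as the LLM; the prompt wording and session ID are illustrative:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant called Zeta."),
    MessagesPlaceholder(variable_name="history"),  # chat history lands here
    ("human", "{query}"),
])
pipeline = prompt | llm

chat_map = {}  # maps session_id -> that session's message history

def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    # create a fresh history the first time a session ID is seen
    if session_id not in chat_map:
        chat_map[session_id] = InMemoryChatMessageHistory()
    return chat_map[session_id]

pipeline_with_history = RunnableWithMessageHistory(
    pipeline,
    get_session_history=get_session_history,
    input_messages_key="query",      # matches {query} in the prompt
    history_messages_key="history",  # matches the MessagesPlaceholder variable_name
)

pipeline_with_history.invoke(
    {"query": "Hi, my name is James"},
    config={"configurable": {"session_id": "id_123"}},
)
```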
So we invoke our runnable and see what we get; oh, I need to run this cell first. Note that we do have this config with a session_id, which, as I mentioned, keeps different conversations separate. We've run that; now let's run a few more. "What is my name again?" Let's see if it remembers: "your name is James, how can I help you today, James?" So what we've just done there is literally conversation buffer memory, but for up-to-date LangChain, with LCEL, with runnables; the recommended way of doing it nowadays.

That's a very simple example; there's really not much to it. It gets a little more complicated as we start thinking about the different memory types, although, with that being said, it's not massively complicated: we're only really going to change the way we get our interactions. So let's dive in and see how we'd do something similar with ConversationBufferWindowMemory. But first, let's understand what the conversation buffer window memory actually is. As I mentioned near the start, it keeps track of the last k messages only. There are a few things to keep in mind here: more messages means more tokens sent with each request, and more tokens per request means higher latency and higher cost. With the previous memory type we send everything, and that increases cost and latency with every message, especially as the conversation gets longer; we might not want that. With the buffer window memory we instead say: just return me the most recent messages.

Let's see how that works. Here we return the most recent four message pairs; again we make sure return_messages is set to True, and again this is deprecated, the old way of doing it; in a moment we'll see the updated way. We add all of our messages, and we can see what's stored: human, AI, human, AI, human, AI, human, AI. That's four pairs of human/AI interactions, but we added more than four pairs, and four pairs only takes us back as far as "I'm researching different types of conversational memory". If we look, the first message we have stored is exactly that, so the two earlier pairs have been cut off, which will be a bit problematic when we ask what our name is. So, using the ConversationChain object again (remember, deprecated), I ask "what is my name again?", and it replies: "I'm sorry, but I don't know your name or any personal information; if you like, you can tell me your name." It doesn't remember, and that's the downside of the buffer window memory. To fix it in this scenario we might just increase k, say to the previous eight interaction pairs, and then it does remember: "what is my name again?" "Your name is James." We've simply modified how much it remembers. Of course, there are pros and cons to this; it really depends on what you're trying to build.
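A minimal sketch of the deprecated window memory, reusing the llm from earlier; the turns shown are illustrative:

```python
from langchain.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationChain

# in this deprecated class, k counts human/AI interaction *pairs*,
# so k=4 keeps the last eight messages
memory = ConversationBufferWindowMemory(k=4, return_messages=True)
chain = ConversationChain(llm=llm, memory=memory)

chain.invoke({"input": "Hi, my name is James"})
# ...more than four further exchanges...
chain.invoke({"input": "What is my name again?"})  # fails once the intro leaves the window
```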
So let's take a look at how we'd implement this with RunnableWithMessageHistory. It gets a little more involved here, although it's not really complicated; we'll see. We create a BufferWindowMessageHistory class, which inherits from LangChain's BaseChatMessageHistory object, and all of our other message history objects will do the same thing. Before, the built-in InMemoryChatMessageHistory basically replicated the buffer memory, so we didn't need to define anything ourselves; in this case we do, so we follow the same pattern LangChain follows with BaseChatMessageHistory. You can see a few of the methods that matter here: add_messages and clear are the ones we're focusing on, and we also need the messages attribute on the object. We're only implementing the synchronous methods here; if we wanted to support async, we'd have to add the async versions of the add, get, and clear methods as well.

So let's do that. We have messages, and we have k; again, we're remembering only the most recent k messages, so it's important to have that attribute. We add messages through add_messages; this is going to be called by LangChain within our runnable, so we need to make sure the method exists. All we do is extend the self.messages list and then trim it down so we don't remember anything beyond the most recent k messages we set. Then we also have the clear method, which just clears the history. So this isn't complicated; it gives us a nice standard interface for message history, and we just need to follow that pattern. I've also included a print statement so we can see what's happening.

Now, for the get_chat_history function we defined earlier: rather than using the built-in object, we use our own BufferWindowMessageHistory, which we defined just above. If the session_id is not in chat_map, as before, we initialize a BufferWindowMessageHistory, setting k with a default value of 4, and then we return it. That's it. So let's run this. We have our RunnableWithMessageHistory with all the same arguments as before, but now we also pass history_factory_config. This is where, if we have new variables that we've added to our message history, in this case k, we need to tell LangChain that this is a new configurable field. We've also added one for the session_id, so we're being explicit and keeping everything in there. We run that, and now let's invoke and see what we get. The important part here is that history_factory_config is fed through into our invoke call, so we can modify those variables from there: in the config we have configurable, the session_id (we can put whatever we want in here), and then we also have the number k.
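A sketch of the pattern just described, assuming the pipeline (prompt | llm) from earlier; the default values and session IDs are illustrative:

```python
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage
from langchain_core.runnables import ConfigurableFieldSpec
from langchain_core.runnables.history import RunnableWithMessageHistory
from pydantic import BaseModel, Field

class BufferWindowMessageHistory(BaseChatMessageHistory, BaseModel):
    messages: list[BaseMessage] = Field(default_factory=list)
    k: int = 4

    def add_messages(self, messages: list[BaseMessage]) -> None:
        """Add new messages, then trim to the most recent k messages."""
        self.messages.extend(messages)
        self.messages = self.messages[-self.k:]

    def clear(self) -> None:
        self.messages = []

chat_map = {}

def get_chat_history(session_id: str, k: int = 4) -> BufferWindowMessageHistory:
    if session_id not in chat_map:
        chat_map[session_id] = BufferWindowMessageHistory(k=k)
    return chat_map[session_id]

pipeline_with_history = RunnableWithMessageHistory(
    pipeline,
    get_session_history=get_chat_history,
    input_messages_key="query",
    history_messages_key="history",
    # expose the history factory's parameters as configurable fields
    history_factory_config=[
        ConfigurableFieldSpec(id="session_id", annotation=str, default="id_default"),
        ConfigurableFieldSpec(id="k", annotation=int, default=4),
    ],
)

pipeline_with_history.invoke(
    {"query": "Hi, my name is James"},
    config={"configurable": {"session_id": "id_k4", "k": 4}},
)
```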
Note that in this implementation we're doing something slightly different: we're remembering the four most recent individual messages rather than the previous four interaction pairs. So, "hi, my name is James"; I'm actually going to clear this and start again, and we'll use the exact same add_user_message and add_ai_message calls that we used before, manually inserting everything into our history so we can just look at the result. And you can see that k=4, unlike before, where we were saving the most recent four interaction pairs, now saves the most recent four interactions, not pairs. Honestly, I just think that's clearer; I find it weird that k=4 would actually save the most recent eight messages, so I'm not replicating that quirk. We could if we wanted to; I just don't like it. Anyway, we can see from messages that we're returning just the four most recent messages. Cool, so using the runnable we've replicated the old way of having a window memory. Now I ask "what is my name again?", and as before it doesn't remember: "I'm sorry, but I don't have access to personal information... if you'd like to tell me your name...". It doesn't know. Let's try a new one where we initialize a new session: we go with a new session ID, which creates a new conversation there, and we set k to 14. I manually insert the other messages as we did before, and we can see all of them; at the top we're still maintaining that "hi, my name is James" message. Now let's see if it remembers my name: "your name is James". There we go, that's working. We also just asked "what is my name again?", so let's check whether that got added to our list of messages. There it is, along with the response, "your name is James". Just by invoking, because we're using RunnableWithMessageHistory, all of that is automatically added into our message history, which is nice.

All right, that's the buffer window memory, and now we'll take a look at something a little more complicated: the summaries. When you think about the summary memory, what we're doing is taking the messages, using an LLM call to summarize and compress them, and then storing that summary in place of the messages. Let's see how we'd actually do that. To start, let's see how it was done in old LangChain: ConversationSummaryMemory. Same interactions again; this time I'm invoking repeatedly rather than adding messages directly, because each interaction actually needs to go through the summarization process. And if we have a look, we can see it happening: "current conversation: hello there, my name is James", and the AI generates; then "current conversation: the human introduces himself as James; the AI greets James warmly and expresses its readiness to chat and assist, inquiring about how his day is going". So it's summarizing the previous interactions; after the summary we have the most recent human message, and then the AI generates its response. That process just keeps repeating with every new interaction.
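Roughly, the deprecated summary memory usage looks like this, again reusing llm; the inputs are illustrative:

```python
from langchain.memory import ConversationSummaryMemory
from langchain.chains import ConversationChain

# this memory makes its own LLM calls to keep one rolling summary
memory = ConversationSummaryMemory(llm=llm)
chain = ConversationChain(llm=llm, memory=memory)

chain.invoke({"input": "Hello there, my name is James"})
chain.invoke({"input": "I'm researching different types of conversational memory"})

# the stored history is now a single, continually rewritten summary
print(memory.load_memory_variables({}))
```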
You'll see that the final summary here is a lot longer, and of course different from that first summary: asking about his day, mentioning his research into different types of conversational memory, the AI responding enthusiastically, explaining that conversational memory includes short-term memory, long-term memory, contextual memory, personalized memory, and inquiring whether James is focused on a specific type. So the summary just gets longer and longer as we go, but at some point the idea is that it stops growing, and it should end up shorter than if you were saving every single interaction, whilst maintaining as much of the information as possible. Of course, you won't maintain all of the information you would with, for example, the buffer memory; with the summary you will lose information, but hopefully less than if you were just cutting interactions. You're trying to reduce your token count whilst keeping as much information as possible. Now let's ask "what is my name again?"; it should be able to answer, because we can see in the summary that I introduced myself as James. Response: "your name is James; how is your research going?" Cool.

Let's see how we'd implement that. As before, we create a ConversationSummaryMessageHistory class. We're importing SystemMessage; we're using that not for the LLM we're chatting with, but for the summary that the summarizing LLM generates (the docstring here actually says it creates the summary, which is not quite correct; not that it matters, it's just the docstring). We have our messages, and we also have an llm attribute, which is different from what we had before: when we initialize ConversationSummaryMessageHistory, we need to pass in our LLM. We have the same methods as before, add_messages and clear. As messages come in, we extend self.messages with the new messages, but then we modify the history: we construct our instructions for making a summary. We have the system prompt, "given the existing conversation summary and the new messages, generate a new summary of the conversation, ensuring to maintain as much relevant information as possible", and then a human message, through which we pass the existing summary and the new messages. We format those, invoke the LLM, and then, in messages, we replace the entire existing history with a new history that is just a single system summary message.

As before, we have the get_chat_history function, exactly the same as before; the only real difference is that we pass in the llm parameter, and because we pass llm in here, we also need to include it in the configurable field spec and provide it when invoking our pipeline. So we run that and pass in the LLM. Now, one side effect of generating summaries of everything is that we're generating more, so we're actually using quite a lot of tokens; whether you save tokens overall depends on the length of the conversation. As a conversation gets longer, if you're storing everything, the token usage just keeps increasing. So if in your use case you expect short conversations, you'd save money and tokens by just using the standard buffer memory, whereas if you expect very long conversations, you'd save tokens and money by using the summary history.
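A sketch of the class just described; the field names and prompt wording are my reading of it, and note that it already formats message content only, which is the fix discussed in a moment:

```python
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class ConversationSummaryMessageHistory(BaseChatMessageHistory, BaseModel):
    messages: list[BaseMessage] = Field(default_factory=list)
    llm: ChatOpenAI  # the summarizing LLM, passed in at init

    def add_messages(self, messages: list[BaseMessage]) -> None:
        # any previous summary is the single SystemMessage stored last time
        existing_summary = ""
        if self.messages and isinstance(self.messages[0], SystemMessage):
            existing_summary = self.messages[0].content
        self.messages.extend(messages)

        summary_prompt = ChatPromptTemplate.from_messages([
            ("system", (
                "Given the existing conversation summary and the new messages, "
                "generate a new summary of the conversation, ensuring to "
                "maintain as much relevant information as possible."
            )),
            ("human", (
                "Existing conversation summary:\n{existing_summary}\n\n"
                "New messages:\n{messages}"
            )),
        ])
        # pass only the text content; passing whole message objects leaks
        # metadata such as token usage into the summary
        new_summary = self.llm.invoke(summary_prompt.format_messages(
            existing_summary=existing_summary,
            messages="\n".join(x.content for x in messages),
        ))
        # replace the entire history with one system summary message
        self.messages = [SystemMessage(content=new_summary.content)]

    def clear(self) -> None:
        self.messages = []
```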
Let's see what we got. We have "a summary of the conversation: James introduced himself by saying 'hi, my name is James'; I responded warmly, asking 'hi James'..." and then it includes details about token usage. So we actually included everything here, which we probably should not have done. Why did we do that? In the human message we're including all of the content from the message objects, meaning the full objects with their metadata. So if we just do x.content for x in messages, that should resolve it; and there we go, quickly fixed. Before, we were passing in the entire message objects, which obviously include all of that extra information, whereas we just want to pass in the content. We modified that, and now we're getting what we expect. Then we can keep going; as we do, the summary should become more abstract, because as we just saw, at first it's almost literally giving us the messages back directly. We keep adding more messages, send and get a response, send again and get a response, invoking all of that, and of course everything gets added into our message history. We've run that; let's see what the latest summary is, and there it is: the summary we now have instead of our raw chat history. Finally, "what is my name again?"; we can double-check that the summary has my name in it, so it should be able to tell us. "Your name is James." Pretty interesting.

Now let's have a quick look over at LangSmith. The reason I want to do this is to point out the different token usage we're getting with each of these memory types. We can see these RunnableWithMessageHistory runs (the naming could probably be improved), and for each one we can see how long it took and how many tokens it used. Going through a few of these: here is that first interaction where we used the buffer memory, and we can see how many tokens were used, 112 tokens when asking "what is my name again?". Then we modified this to include, I think, fourteen interactions or something along those lines, which obviously increases the number of tokens being used. We can see all of that happening in LangSmith, which is quite nice, and we can compare how many tokens each approach uses. That was the buffer window; if we come down and look at this one, which uses our summary, the summary with "what is my name again?" actually used more tokens in this scenario, which is interesting, because we're trying to compress information. The reason is that there haven't been many interactions yet. As the conversation length increases, the summary's total token count should remain relatively small, especially if we prompt it correctly to keep the summary short, whereas with the buffer memory the count will just keep increasing as the conversation gets longer.
So LangSmith is a useful little way to figure out what each of these memory types looks like in terms of tokens and cost.

Okay, so our final memory type acts as a mix of the summary memory and the buffer memory. It keeps the buffer up to a set number of tokens, and once messages exceed that token limit for the buffer, they get folded into our summary. This memory has the benefit of remembering the most recent interactions in detail, whilst also not using ever more tokens as a conversation gets longer, and potentially even exceeding context windows if you try hard enough. So it's a very interesting approach. As before, let's try the original way of implementing this, and then we'll go ahead and use our updated method. We come down to here and do: from langchain.memory import ConversationSummaryBufferMemory. A few things here: the LLM for the summary, the max token limit (the number of tokens we keep before messages get added to the summary), and then return_messages, of course. You can see again that this is deprecated; we use the ConversationChain, pass in our memory, and then we can chat. Super straightforward. First message, then we add a few more; again we have to invoke, because this memory type uses the LLM to create those summaries as it goes. Let's see what they look like. For the first message we have a human message and then an AI message; then, a little lower down, the first thing in our history is a human message, followed by a system message. This is the point where we've exceeded that 300-token limit and the memory type is generating summaries, and the summary comes in as that system message: "the human, named James, introduces himself and mentions he's researching different types of conversational memory", and so on. Coming down a little further, we can see the summary again. So that is the implementation for the old version of this memory type; again, we can see it's deprecated.
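A minimal sketch of the deprecated API described above, reusing llm from earlier; max_token_limit=300 matches the value mentioned in the walkthrough:

```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain.chains import ConversationChain

memory = ConversationSummaryBufferMemory(
    llm=llm,              # the LLM that writes the running summary
    max_token_limit=300,  # buffer size; older messages get summarized
    return_messages=True,
)
chain = ConversationChain(llm=llm, memory=memory)

chain.invoke({"input": "Hi, my name is James"})
# ...more turns; once ~300 tokens are exceeded, a SystemMessage summary
# appears at the start of the history, followed by the recent messages
```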
So how do we implement this for more recent versions of LangChain, specifically 0.3? Again, we use RunnableWithMessageHistory. It looks a little more complicated than what we had before, but it's nothing too complex: we're just creating a summary, as we did with the previous memory type, but the decision about what to add to that summary is based, in this case, on the number of messages. I didn't go with the LangChain version, where it's a number of tokens; I prefer to go with messages. So what I'm doing is saying: keep the last k messages, and once we exceed k messages, anything beyond that gets added to the summary.

We first initialize our ConversationSummaryBufferMessageHistory class with llm and k: the llm, of course, to create summaries, and k as the limit on the number of messages we keep before dropping the oldest from our buffer and folding them into the summary. We begin by asking: do we have an existing summary? The reason we initialize this to None is that we can't extract an existing summary unless one already exists, and the only way to check is to look at our messages. If we have any, we check whether the first one is a system message, because we're using the same structure as the old implementation above, where that first system message is actually our summary. So we check for that; if we find it, we have a little print statement so we can see it, and we set the existing summary to that first message (I should actually move that assignment up to the first check). Note that this existing summary will be a SystemMessage rather than a string.

Then we add any new messages to our history, and if the length of our history exceeds the k value we set, we print that we found that many messages and will be dropping the oldest two. One problem with this, I will say, is that we're not going to save many tokens if we're summarizing every two messages; in an actual production setting I'd probably go up to something like 20 messages, and once we hit 20, take the oldest 10, summarize them, and fold them into the summary alongside any previous summary that already existed. But for a demo this is fine as well. So we pull the oldest messages out (the oldest, I should say, not the latest; we want to keep the latest and drop the oldest) and keep only the most recent messages. Then, if we don't have any old messages to summarize, we don't do anything and simply return. But in the case where this has been triggered and we do have old messages, we come down to the summarization step. Our system message prompt template says: "given the existing conversation summary and the new messages, generate a new summary of the conversation, ensuring to maintain as much relevant information as possible". If we wanted to be more conservative with tokens, we could modify this prompt to say keep the summary within the length of a single paragraph, for example. Then we have our human message prompt template, containing the existing conversation summary and our new messages. Now, the "new messages" here are actually the old messages, but the way we frame it to the LLM is that we want it to summarize the whole conversation; it doesn't need to know about the most recent messages we're keeping in the buffer, as those are irrelevant to the summary. So we just tell it we have these new messages, and as far as this LLM is concerned, that is the full set of interactions. Then we format those, invoke our LLM, print out the new summary so we can see what's going on, and prepend that new summary to our conversation history. We can simply prepend it because, up above, if an existing summary was found we already popped it from the list, so it's already been pulled out and there's nothing extra to delete.
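Pulling that logic together, a sketch might look like the following; this is my reading of the steps described above (in particular, re-attaching an untouched summary when the buffer hasn't been exceeded), not the notebook's exact code:

```python
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class ConversationSummaryBufferMessageHistory(BaseChatMessageHistory, BaseModel):
    messages: list[BaseMessage] = Field(default_factory=list)
    llm: ChatOpenAI   # used only to write the running summary
    k: int = 4        # number of recent messages kept verbatim

    def add_messages(self, new_messages: list[BaseMessage]) -> None:
        # if a summary exists, it is the SystemMessage at position 0;
        # pop it so the slicing below only sees real conversation turns
        existing_summary = None
        if self.messages and isinstance(self.messages[0], SystemMessage):
            existing_summary = self.messages.pop(0)

        self.messages.extend(new_messages)

        # split off everything beyond the most recent k messages
        old_messages = None
        if len(self.messages) > self.k:
            old_messages = self.messages[:-self.k]
            self.messages = self.messages[-self.k:]

        if old_messages is None:
            # buffer not exceeded: re-attach the untouched summary and stop
            if existing_summary is not None:
                self.messages = [existing_summary] + self.messages
            return

        # fold the dropped (oldest) messages into the summary; they are
        # framed as "new messages" because the summarizing LLM only ever
        # sees the summary-so-far plus whatever is being folded in
        summary_prompt = ChatPromptTemplate.from_messages([
            ("system", (
                "Given the existing conversation summary and the new messages, "
                "generate a new summary of the conversation, ensuring to "
                "maintain as much relevant information as possible."
            )),
            ("human", (
                "Existing conversation summary:\n{existing_summary}\n\n"
                "New messages:\n{old_messages}"
            )),
        ])
        new_summary = self.llm.invoke(summary_prompt.format_messages(
            existing_summary=existing_summary.content if existing_summary else "",
            old_messages="\n".join(m.content for m in old_messages),
        ))
        # prepend the fresh summary; the old one was already popped above
        self.messages = [SystemMessage(content=new_summary.content)] + self.messages

    def clear(self) -> None:
        self.messages = []
```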
Then we have the clear method, as before. So that's all the logic for our conversational summary buffer memory. We define our get_chat_history function with the llm and k parameters, and we also set the configurable fields again: session_id, llm, and k. Now we can invoke; the k value to begin with is going to be 4. We can see "no old messages to update summary with", which is good. Let's invoke this a few times and see what we get: "no old messages to update summary with", then "found six messages, dropping the oldest two", and then we have a new summary: "in the conversation, James introduced himself and is interested in researching different types of conversational memory", and so on. You can see there's quite a lot in here at the moment, so we'd definitely want to prompt the summary LLM to keep it short; otherwise we're just getting a ton of stuff. But we can see that it's working; it's functional.

So let's go back and see if we can prompt it to be a little more concise. We come to "ensuring to maintain as much relevant information as possible" and add: "however, we need to keep our summary concise; the limit is a single short paragraph", something like that. Let's try it and see what we get. Message one: again, nothing to update. Then the new summary: you can see it's a bit shorter and doesn't have all those bullet points, so that seems better. The first summary is a bit shorter, and then as we get to the second and third summaries, the second is actually slightly longer than the third. We are going to lose a bit of information in this case, more than we were before, but we're saving a ton of tokens, which is of course a good thing. And of course we could keep going, adding many more interactions, and we should see the conversation summary hold at around the length of one short paragraph.

So that is it for this chapter on conversational memory. We've seen a few different memory types, we've implemented their old, deprecated versions so we could see what they were like, and then we've re-implemented them for the latest versions of LangChain, using logic where we get much more into the weeds. In some ways that complicates things, it's true, but in other ways it gives us a ton of control: we can modify those memory types to our liking, as we did with that final summary buffer memory type, which is incredibly useful when you're actually building applications for the real world. That's it for this chapter; we'll move on to the next one. See you there.