
Memory Masterclass: Make Your AI Agents Remember What They Do! — Mark Bain, AIUS


Transcript

I'm super excited to be here with you. This is my first time speaking at AI Engineer. We have an amazing group of guest speakers: Vasilija Markovic from Cognee (oh, there is Vasilija), Daniel Chalef from Graphiti and Zep, and Alex Gilmore from Neo4j. The plan looks like this. I will do a very quick power talk about a topic that I'm super passionate about:

AI memory. Next, we'll have four live demos, and then we'll move on to a new solution that we are proposing, a GraphRAG chat arena, which I will demonstrate, and I would like you to follow along as it's being demonstrated. And at the very end, we'll have a very short Q&A session.

There is a Slack channel that I would like you to join. So, please scan the QR code right now before we begin, and let's make sure that everyone has access to these materials. There is a walkthrough sheet on the channel that we will go through closer to the end of our workshop.

But I would like you to start setting it up on your laptops, if you want to follow along. All right. It's workshop-graphrackchat. You can also find it on Slack and join the channel. So, a little bit about myself. Hi again, everyone. I'm Mark Bain.

And I'm very passionate about memory: what memory is, the deep physics, and the applications of memory across different technologies. You can find me at Mark and Bain on social media or on my website. And let me tell you a little bit of a story about myself. When I was 16 years old, I was very good at maths, and I did math olympiads with many brilliant minds, including Wojciech Zaremba, the co-founder of OpenAI.

And thanks to that deep understanding of maths and physics, I had many great opportunities to be exposed to the problem of AI memory. First of all, I would like to recall two conversations that I had with Wojciech and Ilya in September 2014. When I came here to study at Stanford, at one party, we met with Ilya and Wojciech, who back then worked at Google.

And they were kind of trying to pitch me that there will be a huge revolution in AI. And I kind of followed that. I was a little bit unimpressed back then. Right now, I look back on that time with a lot of excitement.

And I was really wishing good luck to the guys doing deep learning, because back then I didn't really see the prospect of GPUs giving that huge edge in compute. The conversation with Ilya, however, lasted about 20 minutes, and at the very end I asked him: all right, so there is going to be a big AI revolution.

But how will these AI systems communicate with each other? And the answer was very perplexing, and it kind of sets the stage for what's happening right now. Ilya simply answered: I don't know. I think they will invent their own language. So that was 11 years ago. Fast forward to now.

The last two years I have spent doing very deep research on the physics of AI. I dove into all of the most modern AI architectures, including attention, diffusion models, VAEs, and many others. And I realized that there is something critical. Something missing. And this power talk is about this missing thing.

So over the last two years, I followed up on my earlier years of research in physics, computer science, and information science. And I came to the conclusion that memory, AI memory, is in fact any data in any format, and this is important, including code, algorithms, and hardware, and any causal changes that affect them.

That was something very mind-blowing to conclude. And that conclusion sets the tone for this whole track, the graph track. In fact, I was also perplexed by how biological systems use memory, and how different cosmological structures or quantum structures in fact have a memory.

They kind of remember. And let's get back to maths and to physics and geometry. When I was doing science olympiads, I was really focused on two or three things: geometry, trigonometry, and algebra. And I realized, in the last year, that more or less the volume of laws in physics perfectly matches the volume of laws in mathematics.

And the constants in mathematics, if you really think deeply through geometry, match the constants in physics. And if you think even deeper, they kind of transcend all the other disciplines. And I found out that the principles that govern LLMs are the exact same principles that govern neuroscience.

And they are the exact same principles that govern mathematics. And I studied the papers of Perelman. I don't know if you've heard of Perelman. Perelman is the mathematician who refused to take a $1 million award for proving one of the most important conjectures, the Poincaré conjecture.

I studied the symmetries of 3-spheres. And I realized that this deep math of spheres and circles is very much linked with how attention and diffusion models work. Basically, the formulas that Perelman reached link entropy with curvature. And curvature, if you really think about it, is attention.

It's gravity. So in a sense, there are multiple disciplines where the same things appear again and again. And I will be publishing a series of papers with some amazing supervisors who are co-authors of two of these methodologies, transformers and VAEs. And I came to the realization that one equation governs everything.

It governs math, governs physics, governs our AI memory, governs neuroscience, biology, chemistry, and so on and so forth. So I came to this equation, that memory times compute would like to be a squared imaginary unit circle. If that ever existed, we would have perfect symmetries and we would kind of not exist.

Because for us to exist, these asymmetries need to show up. And in a sense, every single LLM works through weights and biases. The weights give the structure: compute comes in, transforms the data from its raw format, and turns it into weights.

The weights, if you take these billions of parameters, are basically the matrix structure of what the data looks like once you really find the relationships in the raw data. And then there are the biases, these tiny shifts that adapt the model in a robust way so that it doesn't break apart but still reflects reality well.

So something is missing. When we take weights and biases, apply scaling laws, and keep adding more data and more compute, we get a better and better understanding of the data. In a sense, if we had infinite data, we wouldn't have any biases.

And this understanding is again the principle of this GraphRAG track. The disappearance of biases is what we are looking for when we scale our models. So in a sense, the amount of memory and the amount of compute should be exactly the same; they are just expressed in slightly different ways.

But if there are some imbalances, then something important happens. And I came to another conclusion: that our universe is basically a network database. It has a graph structure and it's a temporal structure. It keeps on moving, following certain principles and rules. And these principles and rules are necessarily fuzzy.

They have to be fuzzy, because otherwise everything would be completely predictable. But if it were completely predictable, it would mean that I would know everything about every single one of you, about myself in the past and myself in the future. So in a sense, it's impossible.

And that's why we have these sort of heat, diffusion, entropy models. They allow us to exist. But something is preserved: relationships. Any single tiny asymmetry that happens at the quantum level preserves causal links. And these causal links are the exact thing that I would like you to have as a takeaway from this workshop. The difference between simple RAG, hybrid RAG, any type of RAG, and GraphRAG is that we have the ability to keep these causal links in our memory systems.

Basically, the relationships are what preserves causality. That's why we can solve hallucinations. That's why we can optimize hypothesis generation and testing. So we will be able to do amazing research in biosciences and chemical sciences, just from understanding that this causality is preserved within the relationships. And these relationships, when the necessary asymmetries show up, kind of create this curvature, I would say.

We all intuitively feel this: every single one of you chooses specific workshops and talks to go to. Right now all of you are attending the talk and workshop that we are giving. It means that it matters to you. And it means that potentially you see value.

And this value, this information, transcends space and time. It's very subjective to you or to any other object, and I think we really need to understand this. LLMs are basically these weights and biases, or correlations, that give us this ability to be fuzzy. Actually, one thing I learned from Wojciech about 11 years ago was that hallucinations are exactly the necessary thing for solving a problem where you have too little memory or too little compute for the combinatorial space of the problem you're solving.

So you're basically imagining: you take some hypothesis based on your history and you try to project it into the future. But you have too little memory and too little compute to do that, so the projection can only be as good as the amount of memory and compute you have. The missing part is something you can recover thanks to all of these causal relationships and this fuzziness, and reasoning is the reading of these asymmetries.

And the causal links. Hence, I really believe that agentic systems are the next big thing right now, because they follow the network database principle. But to be causal, to recover this causality from our fuzziness, we need graph databases. We need causal relationships. And that's the major thing in this emerging trend of GraphRAG that we are here to talk about.

And I would like at this moment to invite on stage our three amazing guest speakers. I would like to start with Vasilija. Vasilija, please come over to the stage. Next will be Alex and Daniel, and I will present something myself. All right. So Vasilija will show us how to search and optimize memory based on the use case at hand.

All right. Let's just make sure this works. Okay. Nice to meet you all. I'm Vasilija, and I'm originally from Montenegro, a small country in the Balkans. It is beautiful, so if you want to go there, my cousins Igor and Milos are going to welcome you. Everyone knows everyone. So, in case you're curious about memory, I'm building a memory tool on top of graph and vector databases.

My background is in business, big data engineering, and clinical psychology, so a lot of what Mark talked about connects to that. I'm going to show you a small demo here. The demo is a Mexican standoff between two developers, where we analyze their GitHub repositories. The data from the GitHub repositories is in the graph, and the Mexican standoff means that we will let a crew of agents go analyze their data, compare them against each other, and give us a result that represents who, ideally, we should hire out of these two people.

So what we're seeing here is how Cognee works in the background. Cognee works by adding some data, turning it into a semantic graph, and then letting us search it with a wide variety of options. We plugged CrewAI in on top of it, so we can pretty much do this on the fly.
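To make that flow concrete, here is a minimal Python sketch of the add, cognify, and search loop. The three call names follow Cognee's core API, but the exact signatures, search options, and the sample text are assumptions to check against the current Cognee docs.

```python
import asyncio

import cognee


async def main():
    # Add raw text (or files, or GitHub data) to Cognee's staging store.
    await cognee.add("Developer A writes mostly Python; Developer B works on graph algorithms.")

    # "Cognify" turns the staged data into a semantic knowledge graph.
    await cognee.cognify()

    # Query the graph. The exact search options and return shape are
    # assumptions; check the Cognee docs for the current signature.
    results = await cognee.search("Which developer has the most graph experience?")
    for result in results:
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```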

So here in the background, I have a client running. This client is connected to the system. It's currently searching the data sets and starting to build the graphs. So let's see. It takes a couple of seconds. But in the background, we are effectively ingesting the GitHub data from the GitHub API, building the semantic structure, and then letting the agents actually search it and make decisions on top of it.

So, as always with live demos, things might go wrong, so I have a video version in case this one does. Let's see. And I'll switch to the -- oh, here we go. So the semantic graph started generating. And as you can see, we have an activity log where the graph is being continuously updated on the fly.

Data is being stored in memory. And then data is being enriched, and the agents are going and making decisions on top. So what you can see here on the side is effectively the logic that is reading, writing, analyzing, and using all of this, let's say, preconfigured set of weights and benchmarks to analyze any person here.

So this is a framework that's modular. You can build these tasks. You can ingest from any type of data source; 30-plus data sources are supported now. You can build any type of custom graph. You can build graphs from relational databases and semi-structured data, and we also have these memory association layers inspired by the cognitive science approach.

And then effectively, as we build and enrich this graph on the fly, we see that it's getting bigger. It's getting more populated. And then we're storing the data back into the graph. So this is the stateful, temporal aspect of it. We can build the graph in a way that we can add the data back, that we can analyze these reports, that we can search them, and that we can let other agents access them on the fly.

The idea for us was let's have a place where agents can write and continuously add the data in. So I'll have a look at the graph now. So we can inspect it a bit. So if we click on any node, we can see the details about the commits, about the information from the developers, the PRs, whatever they did in the past, and which repos they contributed to.

And then at the end, as the graph is pretty much filled, we will see the final report starting to come in. So let's see how far we got with this. It's now preparing the final output for the hiring decision task. So let's have a look at that when it gets loaded.

We just finished this this morning. I hoped to have a hosted version for you all today, but it didn't work out; the Wi-Fi is causing some troubles. We still have to resolve that one. So let's see. Yes, I will just show you the video of the ending so we don't wait for it.

So here you can see that towards the end, we can see the graph and we can see the final decision, which is a green node. And in the green node, we can see that we decided to hire Laszlo, our developer who has a PhD in graphs. So it's not really difficult to make that call.

And we see why, and we see the numbers and the benchmarks. So thank you. Again, a very fast three-minute demo, so I hope you enjoyed it. If you have some questions, I'm here afterwards. We are open source, so happy to see new users; if you're interested, try it.

Thanks. Woohoo! Thank you. Thank you, Vasilija. Next up is Alex. So Vasilija showed us something I call semantic memory. So basically you take your data, you load it and cognify it, as they like to say. Come on, come on up, Alex. And that's the base. That's something we already are doing.

And next up, Alex will show us the Neo4j MCP server. The stage is yours. Test, test, test, five, four, three, two, one, we're good. Okay. All right. So, hi everyone. My name's Alex. I'm an AI architect at Neo4j. I'm going to demo the memory MCP server that we have available.

So there is this walkthrough document that I have. We'll make this available in the Slack or by some other means so that you can do this on your own. But it's pretty simple to set up. And what we're going to showcase today is really the foundational functionality that we would like to see in an agentic memory sort of application.

Primarily, we're going to take a look at semantic memory in this MCP server, but we are currently developing it. And we're going to add additional memory types as well, which we'll discuss probably later on in the presentation. So in order to do this, we will need a Neo4j database.

Neo4j is a graph-native database that we'll be using to store the knowledge graph we're creating. There is an Aura option, which is hosted in the cloud, or we can just do this locally with the Neo4j Desktop app. Additionally, we're going to do this via Claude Desktop, so we just need to download that.

Then we can just add this config to the MCP configuration file in Claude, and it will connect to the Neo4j instance that you create. What's happening here is that Claude will pull down the memory server from PyPI and host it in the backend for us.
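For reference, the config entry has roughly the shape below, written here as a Python dict that would be serialized into Claude Desktop's JSON config file. The package name, launcher command, and environment variable names are assumptions; copy the exact values from the walkthrough document.

```python
import json

# Approximate shape of the entry added to Claude Desktop's MCP config
# (claude_desktop_config.json). Package name, launcher, and variable names
# are assumptions; take the exact values from the walkthrough document.
config = {
    "mcpServers": {
        "neo4j-memory": {
            "command": "uvx",
            "args": ["mcp-neo4j-memory"],
            "env": {
                "NEO4J_URI": "neo4j+s://<your-instance>.databases.neo4j.io",
                "NEO4J_USERNAME": "neo4j",
                "NEO4J_PASSWORD": "<your-password>",
            },
        }
    }
}

print(json.dumps(config, indent=2))
```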

And then it will be able to use the tools that are accessible via the MCP server. And the final thing that we're going to do before we can actually have the conversation is we're just going to use this brief system prompt. And what this does is just ensure that we are properly recalling and then logging memories after each interaction that we have.
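The exact prompt is in the walkthrough document; an illustrative stand-in with the same intent might look like this.

```python
# Illustrative stand-in for the brief memory system prompt; the actual text
# is in the walkthrough document.
MEMORY_SYSTEM_PROMPT = """
Before answering, search your memory for entities and relationships related
to the user's message and use anything relevant as context. After answering,
store any new entities, relationships, and observations from this
interaction back into memory.
"""
```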

So with that, we can take a look at a conversation that I had in Claude Desktop using this memory server. This is a conversation about starting an agentic AI memory company. And so we can see all these tool calls here. Initially, we have nothing in our memory store, which is as expected.

Now, as we kind of progress through this conversation, we can see that at each interaction, it tries to recall memories that are related to the user prompt. And then at the end of this interaction, it will create new entities in our knowledge graph and relationships. And so in this case, an entity is going to have a name, a type, and then a list of observations.

And these are just facts that we know about this entity. And this is what is going to be updated as we learn more. In terms of the relationships, these are just identifying how these entities relate to one another. And this is really the core piece of why using a graph database as sort of the context layer is so important because we can identify how these entities are actually related to each other.
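As a rough picture, an entity and a relationship passed through the memory tools might look like the following; the field names here are illustrative, not the server's exact schema.

```python
# Illustrative shapes only; the MCP server's exact field names may differ.
entity = {
    "name": "Agentic Memory Startup",
    "type": "Company",
    "observations": [
        "Idea discussed in a Claude Desktop conversation",
        "Wants to build a graph-backed memory layer for agents",
    ],
}

relationship = {
    "source": "Agentic Memory Startup",
    "target": "Neo4j",
    "relation": "USES",  # how the two entities relate to one another
}
```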

So the graph provides a very rich context. And as this goes on, we can see that we have quite a few interactions. We are adding observations, creating more entities. And at the very end here, we can see we have quite a lengthy conversation. We can say, let's review what we have so far.

And so we can read the entire knowledge graph back as context, and Claude can then summarize that for us. And so we have all of the entities we found, all the relationships that we've identified, and all the facts that we know about these entities based on our conversation. And so this provides a nice review of what we discussed about this company and our ideas about how to create it.

Now we can also go into the Neo4j Browser. This is available both in Aura and locally, and we can actually visualize this knowledge graph. We can see that we discussed Neo4j, we discussed MCP, and LangGraph. And if we click on one of these nodes, we can see that there is a list of observations that we have.

And this is all of the information that we've tracked throughout that conversation. It's important to know that even though this knowledge graph was created from a single conversation, we can also take it and use it in additional conversations. We can use this knowledge graph with other clients such as Cursor IDE or Windsurf.
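If you prefer code to the browser, the same graph can also be inspected with the official Neo4j Python driver. The MATCH pattern below assumes entities are stored as nodes with name and observations properties, which you should verify against your own database.

```python
from neo4j import GraphDatabase

# Connection details for your local instance or Aura.
driver = GraphDatabase.driver(
    "neo4j://localhost:7687", auth=("neo4j", "<your-password>")
)

# Assumes entities carry `name` and `observations` properties; adjust the
# pattern to whatever schema the memory server actually writes.
query = """
MATCH (e)
WHERE e.name IS NOT NULL AND e.observations IS NOT NULL
RETURN e.name AS name, e.observations AS observations
LIMIT 10
"""

with driver.session() as session:
    for record in session.run(query):
        print(record["name"], record["observations"])

driver.close()
```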

And so this is really a powerful way to create a memory layer for all of your applications. And with that, I'll pass it on. Thank you. All right. Give a round of applause to Alex. Thank you, Alex. Next up is Daniel. I will just share my personal beliefs about MCPs.

I was testing the MCPs of Neo4j, Graphiti, Cognee, and Mem0 just before the workshop. And I'm a strong believer that this is our future. We'll have to work on that. In a second, I will be showing a mini GraphRAG chat arena. And next up, something very, very important that Daniel does: temporal graphs.

Daniel is a co-founder of Graphiti and Zep. They have 10,000 stars on GitHub and are growing very fast. The stage is yours, Daniel. Please show us what you do. Thank you. So: 5, 4, 3, 2, 1. Did that work? It seems to have, right? So I'm here today to tell you that there's no one-size-fits-all memory.

And that you need to model your memory after your business domain. If you saw me a little bit earlier talking about Graphiti, Zep's open-source temporal graph framework, you might have seen me speak about how you can build custom entities and edges in the Graphiti graph for your particular business domain.

So, business objects from your business domain. What I'm going to demo today is how Zep implements that and how easy it is to use from Python, TypeScript, or Go. And what we've done here is solve a fundamental problem plaguing memory, enabling developers to build out memory that is far more cogent and capable for many different use cases.

So I'm going to just show you a quick example of where things go really wrong. So many of you might have used ChatGPT before. It generates facts about you in memory. And you might have noticed that it really struggles with relevance. Sometimes it just pulls out all sorts of arbitrary facts about you.

And unfortunately, when you store arbitrary facts and retrieve them as memory, you get inaccurate responses or hallucinations. And the same problem happens when you're building your own agents. So here we go. We have an example media assistant. And it should remember things about jazz music, NPR podcasts, The Daily, et cetera, all the things that I like to listen to.

But unfortunately, because I'm in conversation with the agent or it's picking up my voice when I'm, you know, it's a voice agent, it's learning all sorts of irrelevant things, like I wake up at 7:00 a.m., my dog's name is Melody, et cetera. And the point here is that irrelevant facts pollute memory.

They're not specific to the media player business domain. And the technical reality here is that many frameworks take a really simplistic approach to generating facts. If you're using an agent framework that has memory capabilities, it's generating facts and throwing them into a vector database. And unfortunately, facts dumped into a vector database or Redis mean that when you're recalling that memory, it's difficult to differentiate what should be returned.

We're going to return what is semantically similar. And here we have a bunch of facts that are semantically similar to my request for my favorite tunes. We have some good things. And unfortunately, Melody is there as well, because Melody is a dog named Melody. And that might be something to do with tunes.

And so a bunch of irrelevant stuff. So basically, semantic similarity is not business relevance. And this is not unexpected. I was speaking a little bit earlier about how vectors are just basically projections into an embedding space. There are no causal or relational links between them. And so we need a solution.

We need domain-aware memory, not better semantic search. So with that, I am going to, unfortunately, be showing you a video because the Wi-Fi has been absolutely terrible. And let me bring up the video. Okay, so I built a little application here. And it is a finance coach. And I've told it I want to buy a house.

And it's asking me, well, how much do I earn a year? It's asking me about what student loan debt I might have. And you'll see that on the right-hand side, what is stored in Zep's memory are some very explicit business objects. We have financial goals, debts, income sources, et cetera.

These are defined by the developer. And they're defined in a way that is really simple to understand. We can use Pydantic or Zod or Go structs. And we can apply business rules. So let's go take a look at some of the code here. We have a TypeScript financial goal schema using Zep's underlying SDK.

We can define these entity types. We can give a description to the entity type. We can even define fields, the business rules for those fields, and the values they take on. And then we can build tools for our agent to retrieve a financial snapshot, which runs multiple Zep searches concurrently and filters by specific node types.
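In Python, the same idea might look roughly like this. The models are ordinary Pydantic; the registration and filtered-search calls at the end are placeholders for whatever the Zep SDK actually exposes, so treat them as assumptions rather than the real API.

```python
from pydantic import BaseModel, Field


class FinancialGoal(BaseModel):
    """A user's financial goal, e.g. saving for a house down payment."""

    description: str = Field(description="What the user wants to achieve")
    target_amount_usd: float = Field(description="Amount needed, in USD")
    target_date: str | None = Field(
        default=None, description="When the user hopes to reach the goal"
    )


class DebtAccount(BaseModel):
    """A recurring debt or liability, e.g. rent or a student loan."""

    name: str = Field(description="What the debt is for")
    monthly_payment_usd: float = Field(description="Monthly payment, in USD")


# Hypothetical registration and retrieval; the method names below are
# assumptions, not the actual Zep SDK surface. The point is that the
# ontology is developer-defined and searches can filter by node type.
# zep.graph.set_ontology(entities=[FinancialGoal, DebtAccount])
# snapshot = zep.graph.search(user_id="...", node_labels=["DebtAccount"])
```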

And when we start our Zep application, we register these particular objects with Zep so it knows to build this ontology in the graph. So let's do a quick little addition here. I'm going to say that I have $5,000 a month rent.

I think it's rent. And in a few seconds, we see that Zep has already parsed that new message and has captured that $5,000. And we can go look at the graph. This is the Zep front end. And we can see that the knowledge graph for this user has got a debt account entity.

It's got fields on it that we've defined as the developer. And so again, we can get really tight about what we retrieve from Zep by filtering. Okay. So we're at time. Just very quickly, we wrote a paper about how all of this works. You can get to it via the link below. I appreciate your time today.

You can look me up afterwards. Great paper, really. All right. So while I'm getting ready, I would appreciate it if you could confirm whether you have access to Slack. Is the Slack channel working for you? All right. I think we're slowly running out of time. So I'd appreciate any questions you have for any of the speakers.

Please write these questions on Slack. We will also be outside of this room, happy to answer more of them just after the workshop. I'll now move on to a use case that I developed, and to the GraphRAG chat arena. To be specific: before delving into agentic memory and knowledge graphs, I led a private cybersecurity lab and worked for defence clients.

A very big client with very serious problems on the security side. In one project, I had to navigate between something like 27 or 29 different terminals and shells. And that requires knowing lots of languages: if you think of different Linux distros, every firewall and networking device usually has its own shell, often proprietary.

There is PowerShell. So you need to know lots of languages to communicate with these machines when working with such clients. And I realized that LLMs are not only amazing at translating between these languages, they are also very good for creating a new type of shell, a human-language shell.

There are such shells. But such shells would really be excellent if they had episodic memory, the sort of temporal memory of what has happened in the shell historically. And if we have access to this temporal history, the events, we know what the users were doing and what their behaviours are.

We can control every single code execution function that's running, including those of agents. So, with some investors and advisers of mine, I spotted a niche: something we call an agentic firewall. And I wanted to do a super quick demo of how it would work. So basically, you would run commands and type pwd.

And I suppose lots of us had computer science classes or have worked in a shell, and we have to remember all of these commands. Like, show me running Docker containers: it's docker ps, right? But if you go for more advanced commands... I think it's for a reason, yeah.

All right, it's there. Okay, thank you. In general, I would need to know right now some command that can extract, for instance, the name of the container that's running and its status. Show me just the image and status. I can make mistakes, human-language fuzzy mistakes. So: is Apache running?

All right. Show the command we ran three commands ago. So basically, if you plug agentic memory into things like that (I think it got it wrong, but you get me, right?), then as I go through different shells and terminals, I have this textual context of what was done and the context of what is happening on a certain machine.

And it spans across all the machines, all the users, and all the sessions in PTYs and TTYs, so I think we can really have very good context for security as well. That space, the temporal logs, the episodic logs, is something that I see will boom and emerge.

So I believe that all of our agents that execute code in terminals, maybe not all, but the ones running behind the enterprise gate, will be executing it through agentic firewalls. I'm close to sure about that. So that's my use case.
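A stripped-down sketch of that idea might look like the following, where translate_to_command is a hypothetical stand-in for an LLM call and the episodic log is just a list rather than a temporal graph.

```python
import subprocess
from datetime import datetime, timezone

# Episodic memory: an append-only log of what was asked, what ran, and what
# came back. In a real agentic firewall this would live in a temporal graph.
episodes: list[dict] = []


def translate_to_command(request: str) -> str:
    """Hypothetical stand-in for an LLM call that turns natural language
    into a shell command, e.g. 'show running containers' -> 'docker ps'."""
    raise NotImplementedError("wire up your LLM of choice here")


def run(request: str) -> str:
    command = translate_to_command(request)
    # Policy checks against the episodic history would go here before
    # anything is allowed to execute.
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    episodes.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "request": request,
        "command": command,
        "output": result.stdout,
    })
    return result.stdout
```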

And now let's move on to the GraphRAG chat arena. On Slack you have a link to this doc, and the doc lets you set up a repo that we've created for this workshop. We'll be promoting it afterwards. About a year ago, I met with Jerry Liu from LlamaIndex and we chatted for quite a while about how to evolve conversational memory.

And he gave me two pieces of advice. One of them: think about data abstractions. The other: think about evals. Data abstractions I solved fairly quickly, within about two months. On evals, I realized that there won't be any evals in the form of a benchmark. All of these HotpotQA-style benchmarks and all of that, it's fun.

I know that there are great papers written by our guest speakers and other folks using those benchmarks. But that's not the thing. You can't make a benchmark for a thing that doesn't exist yet. Basically, agentic GraphRAG memory will be a type of memory that evolves, so you don't know what will evolve.

So if you don't know what will evolve, you will need a simulation arena. And that will be the only right eval. Fast forward one year, and we've created a prototype of such an agentic memory arena. Think of it like WebArena, but for memory. And let me quickly show you that.

You can go to this repository; I did a fork of it. There is Mem0. There is Graphiti. There is Cognee. And there will be two approaches: one approach uses the repo, the library itself, and the other goes through MCPs. Because we don't really know which will work out better.

Whether the repos or the MCPs will work out better: we need to test these different approaches, so we need to create this arena for that. We basically clone that repo, and we use ADK for it. So we get this nice chat where you can talk to these agents.

And you can switch between agents. Say I want to talk with Neo: there is a Neo4j agent running behind the scenes, and there is a Cypher graph agent running behind the scenes, and for now I can switch between these agents. Maybe I'll increase the font size a little bit.

So the Neo agent basically answers questions about this amazing technology, graphs, specifically Neo4j. And I can switch to Cypher, and then an agent that is excellent at running Cypher queries talks with me. And I'm writing: add to the graph that I'm Mark and I'm passionate about memory architectures.

And basically what it does is run these layers created by Cognee, by Mem0, by Graphiti, and all the other vendors of semantic and temporal memory solutions, or created specifically by the MCP server that Alex was demonstrating, the Neo4j MCP server. So I'm really looking forward to how this technology evolves.
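Under the hood, each of those personas is just an agent definition. A minimal sketch using Google's ADK might look like the following; the model name, instructions, and the run_cypher tool are placeholders rather than the repo's actual code.

```python
from google.adk.agents import Agent


def run_cypher(query: str) -> str:
    """Hypothetical tool that would run a Cypher query against the shared
    Neo4j graph and return the result as text."""
    raise NotImplementedError


# An agent persona that answers questions about graphs and Neo4j.
neo_agent = Agent(
    name="neo",
    model="gemini-2.0-flash",
    instruction="Answer questions about graphs and Neo4j.",
)

# An agent persona that translates requests into Cypher and runs them.
cypher_agent = Agent(
    name="cypher",
    model="gemini-2.0-flash",
    instruction="Translate the user's request into Cypher and execute it.",
    tools=[run_cypher],
)
```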

But what I quickly wanted to show you is that it already works. It has the essence of being this agentic memory arena. So I can ask my graph questions, and the agent goes over the connection. And you know what's amazing? It's just one Neo4j graph.

It's just one Neo4j graph on the backend, and all of these technologies can be tested on it: how the graphs are created and retrieved. When I think of that, it's like the most brilliant thing we can do with agentic memory simulations. So I get answers from the graph.

Here is the graph. I can basically rerun the commands to see what's happening on the graph. Let me just move on. Next, I would like to add to the graph that Vasilija will show how to integrate Cognee, and so on. So I add new information, and the Cypher agent writes it to the graph.

Then I want to do something else. It's still super early stage. But then I switch to Graphiti, and I can repeat the exact same process. So right now, using Graphiti, I can search for what I just added. And I can switch between these different memory solutions. That's why I'm so excited about it.
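For the Graphiti leg, the core loop is episode ingestion plus hybrid search. A rough sketch with graphiti-core could look like this; the constructor and method parameters are my best reading of the library, so verify them against the project README.

```python
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti


async def main():
    # Connects to the same Neo4j instance backing the arena.
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "<your-password>")

    # Ingest a new episode; parameter names are my best reading of the
    # library and may differ in the current release.
    await graphiti.add_episode(
        name="workshop-note",
        episode_body="Vasilija will show how to integrate Cognee.",
        source_description="GraphRAG chat arena",
        reference_time=datetime.now(timezone.utc),
    )

    # Hybrid (semantic plus graph) search over what was just added.
    results = await graphiti.search("What will Vasilija show?")
    for r in results:
        print(r)


asyncio.run(main())
```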

We do not have time to practice it together and do the workshop, but I'm sure we will write some articles, so please follow us. And if you have any questions, I would appreciate it if you passed them on via Slack. I will ask Andreas whether we have time for a short Q&A.

Or whether we need to move it to the breakout or outside of the room. It would take, like, five minutes? Five minutes. All right. So that's all for now, for today. I would really like Vasilija, Daniel, and Alex to come back to the stage; please direct your questions to any of us, and we'll try to answer them.

Yeah. Let's go. Hi, I'm Lucas. I want to ask a fundamental question. How do you decide what is a bad memory over time? Because, as a developer and as a person, we evolve our line of thought. So something that you thought was good three or ten years ago may not be good today.

So how do you decide? Sure, a very good question. I will answer, and maybe you guys can help, in a very scientific way. Basically, it's the one that causes a lot of noise; the noisy one doesn't make a lot of sense. You decrease noise by redundancy and by relationships.

So the fewer the relationships and the greater the noisiness, the more suspect the memory. In a sense, a not-well-connected node has the potential of not being correct, but there are other ways to validate that. Would you like to follow on? Yeah, sure. A practical way: we let you model the data with Pydantic so you can load the data you need and add weights to the edges and nodes.

So you can do something like temporal weighting, you can add your custom, let's say, logic and then effectively you would know how your data is kind of evolving in time and how it's becoming less or more relevant and what is the set of algorithms you would need to apply.

So this is the idea: not to solve it for you, but to help you solve it with tooling. But yeah, it depends on the use case, I would say. Yeah. I have nothing to add; I think that's a great explanation. What I would add is that there are missing causal links.

Missing causal links are most probably a good indicator of fuzziness. Yeah. Next question. Can you hear me? How would you embed security or privacy into the network or the application layer? If there's a corporation with top-secret data, or I have personal data in a graph, I want to share that, but not all of it.

Oh, that's a really good one. I think I'll answer that very briefly. Basically, you do have to have that context. You have to have the decisions and intentions of colonels, of majors, of CISOs, and of anyone else in the enterprise stack. And in a sense, it also gets fuzzy and complex, so I expect this to be a very big challenge.

That's why I want to work on that. But I'm sure that applying ontologies, the right ontologies, first of all, to this enterprise cybersecurity stack really kind of provides these guardrails for navigating this challenging problem and decreasing this fuzziness and errors. Thank you. Yeah. I would also just add, like, all these applications are built on Neo4j.

And in Neo4j you can do role-based access controls, so you can prevent users from accessing data that they're not allowed to see. So it's something that you can configure there. And one more question. Hi. This question is for Mark. Yeah, go on.

You were about to say something? Please go ahead first. Yeah, just one thing. We also noticed that if you isolate a graph per user, or keep it very physically separate, it really works well for us. People react to that really well.

So that's one way. Yes. Independent graphs. Personal graphs. Yeah. Mark, in your earlier presentation, you mentioned this equation that relates to gravity, entropy, and something, and also memory and compute. Yes. Could you show those two again and explain them again? Of course. Yeah. If we have time. Other than that, it's probably for a series of papers to properly explain that.

So that's one: memory times compute equals i squared. The other one is that if you take attention, diffusion, and VAEs, which do the smoothing, what is preserved are the asymmetries. So very briefly, let's set up the vocabulary. First of all, curvature equals attention equals gravity.

This is the very simple, most important principle here. When writing these papers, we are trying to define these three really tightly. Next: diffusion, heat, entropy. It's the exact same thing; we just need to align the definitions. And if it's not the exact same thing, if there are other definitions, we need to show what's really different.

And now, if you think about attention, it kind of shows the pathways towards certain asymmetries. If you take a sphere and start bending it, trying to extend it, two things happen: entropy increases and curvature increases, in a sense.

And what Perelman did was prove that you can bend these spheres in any way, for 3D spheres; 4D, 5D, and higher-dimensional spheres were already solved, so he solved it for the 3-sphere. And these equations prove that basically there won't be any other architectures for LLMs.

It will be just attention, diffusion models, and VAEs. Maybe not just VAEs, but something that smooths, that leaves room for biases. All right. Thank you, all. I really appreciate you coming. I hope it was helpful. Thank you to the guest speakers. And we'll answer questions outside of the room.

I appreciate that. Thank you. Thank you. Thank you. We'll see you next time.