MCPs are Boring (or: Why we are losing the Sparkle of LLMs) - Manuel Odendahl

00:00:00.980 |
I'm Manuel, and I'm going to be talking about MCPs are Boring, or why we're losing the sparkle of LLMs. 00:00:08.620 |
So just a little bit about me. I'm Manuel. I'm Program with AI on Twitter. 00:00:14.400 |
My GitHub's Vazen and I've been a software engineer for 25 years. 00:00:19.360 |
Probably a veteran coder at this point, I guess. 00:00:21.560 |
And I've been writing a lot of Common Lisp, and I'm bringing that up because it's going to be relevant for this talk. 00:00:27.860 |
But I've been an embedded engineer. I love search engines. I love databases. I love back-end coding. 00:00:34.540 |
I love all of that kind of coding, basically: boilerplate and back-end and those kinds of things. 00:00:40.960 |
I've been coding with LLMs since 2022, since the alpha of Copilot, I think. 00:00:45.940 |
And then when ChatGPT came out, I decided to do all my coding with LLMs, and I've been pretty much obsessed with it since then. 00:00:53.080 |
That's my program with AI Twitter account, where I share all my tips and all the things I've discovered. 00:01:00.160 |
And we've been using tool calling in LLMs since, I guess, it came out about two years ago. 00:01:10.420 |
So what that means is that we've written a lot about tools. 00:01:13.940 |
So LLMs, per se, know a lot about calling tools. 00:01:18.000 |
And I mean, we've written a lot about tool use in our literature and in our code, and they've been reinforced on it. 00:01:23.760 |
So we have a lot of content about tool calling. 00:01:28.360 |
And what that means is that LLMs can produce language that calls tools in a structured format, which is what we usually do. 00:01:36.920 |
And then we map the structured format that comes out to just basically call a function, call an API, call an MCP server. 00:01:45.280 |
And it's magical: the text comes back, we don't really need a schema, the LLM can continue working on it, can answer questions about it. 00:01:52.080 |
And what that means is that now, instead of writing code, having to parse schemas, having to validate them, having to be really careful, we can just talk to the machine and say: please call the API and then give me that information. 00:02:09.220 |
I think everybody here uses agents and knows pretty much what I mean. 00:02:16.340 |
I think this is something that is fairly straightforward. 00:02:20.140 |
It's the standard example of what's the weather like in San Francisco. 00:02:23.080 |
The assistant can do some chain of thought, for example, and checks what tools it has available, which are provided as a schema in the context that's being sent to the LLM. 00:02:33.460 |
And it basically says in a special token format, these are your tools, this is how you call them, this is what it means to call them. 00:02:39.740 |
And the LLM then decides that, you know, getting the current weather might be a pretty good tool to use to get the weather in San Francisco, right? 00:02:49.940 |
And the tool then, which is a deterministic piece of code, is going to call whatever weather API you have, get your current conditions, and return that as JSON, maybe. 00:03:00.120 |
And the LLM receiving this JSON does a new inference and outputs an answer to the user's request. 00:03:09.800 |
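To make that loop concrete, here is a rough TypeScript sketch of the host application side: the tool schema handed to the model, a tool call the model might emit, and the deterministic dispatch. The tool name, argument shape, and weather URL are invented for illustration.

```typescript
// The tool schema the host sends to the model (special-token formatting omitted).
const tools = [
  {
    name: "get_current_weather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
];

// What the model might emit after reading the schema and the user's question.
type ToolCall = { name: string; arguments: Record<string, unknown> };
const call: ToolCall = { name: "get_current_weather", arguments: { city: "San Francisco" } };

// Deterministic code the host maps the tool name onto (URL is made up).
async function getCurrentWeather(city: string): Promise<string> {
  const res = await fetch(`https://weather.example.com/current?city=${encodeURIComponent(city)}`);
  return res.text(); // raw JSON that gets appended to the conversation
}

async function dispatch(c: ToolCall): Promise<string> {
  if (c.name === "get_current_weather") return getCurrentWeather(c.arguments.city as string);
  throw new Error(`unknown tool: ${c.name}`);
}
// dispatch(call) runs, and its JSON result is fed back in for the next inference.
```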
Well, there's no magic in how it works, but as soon as you've wired up a few tools, especially your own tools, you realize how magical that is. 00:03:20.960 |
But great, we have MCP now, which allows any kind of LLM app to interact with tools that other people have built. 00:03:28.120 |
So you can download an MCP to interact with GitHub, you can download one to interact with Blender, you can download one to interact with your own files, you can download one to interact with the temperature in your room if you have built such an MCP. 00:03:41.800 |
Those are absolutely mind-boggling possibilities. 00:03:44.320 |
The problem is when an MCP gives the LLM too many tools, right? 00:03:50.760 |
I'm just going through this because, again, it's absolutely magic, it's amazing. 00:03:54.480 |
But if you give an LLM too many tools, if you give it, like, a hundred tools, it's going to be: well, should I call get weather, or should I call search internet for the weather in San Francisco? 00:04:04.540 |
So it's kind of hard to decide, and the LLM ends up calling the wrong tool, or using the wrong schema, or not really understanding what parameters to give to a certain tool. 00:04:19.040 |
And the other problem is that when you call a tool, the arguments that the LLM is going to give that tool, and by extension those that we provide to that tool, all have to be inferenced by the LLM. 00:04:33.640 |
So if we want to call a tool and say, you know, we have this fragment of a document, please search for something similar somewhere else, the LLM has to emit that part of the document, which is probably already higher up in the context anyway, and repeat it to give it to the tool, which is pretty expensive and pretty slow. 00:04:55.320 |
I think you all know that, where it's going to be, like, spinny, spinny, spinny, spinny, spinny. 00:05:00.820 |
And actually, it's just copy-pasting something that's literally just above. 00:05:05.040 |
If you go back to the San Francisco example, it's a smaller, less impressive version of that, but it still repeats the "San Francisco", right? 00:05:17.520 |
So the problem is that all the tokens that come back from that tool also have to be put in the LLM. 00:05:23.100 |
So if you do something like getting the weather and you only want the wind speed, you'll still get the whole JSON that gives you the date and all kinds of information about the sun and the moon, when you actually only care about a single number. 00:05:40.320 |
So you've wasted 2,000 tokens on something that could have been answered much more quickly. 00:05:48.220 |
Imagine you have a CRM and you ask for the contact information of OpenAI. What it's going to do is call its tool called GetCRMCompanies. 00:05:59.260 |
And this is a massive response because there's 36 companies in your database. 00:06:04.240 |
So you end up having this like insane list of companies. 00:06:07.780 |
And then at the end, you basically just look at the email of OpenAI, and you could say, oh, bad tool, right? 00:06:16.080 |
Like, add a tool that's called GetCRMCompany instead. 00:06:18.740 |
But then if you misspell OpenAI, maybe that won't work, right? 00:06:21.840 |
Like, you have all these little cases, and you're starting to optimize for them: maybe you only want one company, maybe you only want the contact information for that company. 00:06:31.040 |
So you add a flag, like contact equals true. 00:06:33.560 |
And you end up with something crazy, like GraphQL, at the end of the day, 00:06:37.220 |
if you want to cover all your bases. And then the model is going to generate wrong GraphQL, and it's going to go downhill. 00:06:48.420 |
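A sketch of that schema creep, with hypothetical field names: every special case becomes another parameter, until the "tool" is a little query language the model has to get exactly right.

```typescript
// Hypothetical schema creep: each special case adds another knob to the "tool".
const getCrmCompaniesSchema = {
  name: "get_crm_companies",
  description: "List companies in the CRM",
  parameters: {
    type: "object",
    properties: {
      name: { type: "string", description: "Exact company name (a typo breaks this)" },
      fuzzy: { type: "boolean", description: "Fuzzy-match the name instead" },
      contactsOnly: { type: "boolean", description: "Return only contact info" },
      fields: { type: "array", items: { type: "string" }, description: "Projection, GraphQL-style" },
      limit: { type: "integer" },
      offset: { type: "integer" },
    },
  },
};
// The model now has to fill all of this in correctly, which is where it goes downhill.
```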
Now, it is great to have all the information about the companies, because suddenly I can ask questions like, who's in San Francisco? And it's able to answer these fuzzy queries that I maybe never asked for. 00:07:00.780 |
The problem is you end up having 20,000 tokens. 00:07:06.600 |
You're waiting for five minutes before you get the answer to your query. 00:07:13.880 |
So we can engineer around it, but really, the question is: why are MCPs basically so restricted? 00:07:25.360 |
We are not leveraging all the things we could actually do with LLMs, with code generated by LLMs, with the tool calling around them. 00:07:33.920 |
For example, why don't we pass the chat history along with tool calls? 00:07:39.160 |
The tool can just decide to discard it, right? 00:07:44.080 |
But we already have all this information in the context of an LLM application. 00:07:49.000 |
Why not also pass it to the tool call, so that the tool can say: you know, we've called this thing 15 times, just reuse the arguments from before, or use this as a search query, for example. 00:08:06.440 |
It's like, you know, please just give me the chat history that is present as metadata to your tool call. 00:08:12.360 |
You could also give me your memories if you have an LLM application that has memories and you have persistent knowledge. 00:08:18.580 |
Like, why not give that to my tool to say like, well, you have already called me 15 times. 00:08:24.480 |
Then I can just call get weather and get weather will look at my memories and say like, oh, there's a location memory. 00:08:31.740 |
Pretty easy extension that allows you to do so, so, so much more at a very, very low cost. 00:08:39.140 |
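Here is a hedged sketch of what such an extension could look like; none of this is part of MCP today, and all the field names are hypothetical. The host attaches conversation metadata to the tool call, and the tool can use it or ignore it.

```typescript
// Hypothetical extension, not part of MCP today: the host attaches conversation
// metadata to every tool call, and the tool is free to use or discard it.
interface ToolCallEnvelope {
  name: string;
  arguments: Record<string, unknown>;
  meta?: {
    chatHistory?: { role: "user" | "assistant"; content: string }[];
    memories?: { key: string; value: string }[]; // e.g. { key: "location", value: "San Francisco" }
    previousCalls?: { name: string; arguments: Record<string, unknown> }[];
  };
}

// A weather tool that falls back to a remembered location instead of making the
// model re-emit it as an argument.
function resolveCity(call: ToolCallEnvelope): string | undefined {
  const fromArgs = call.arguments.city as string | undefined;
  const fromMemory = call.meta?.memories?.find((m) => m.key === "location")?.value;
  return fromArgs ?? fromMemory;
}
```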
And another thing: if you can attach files in your LLM application, why not pass these files along? 00:08:44.220 |
Or at least pass their metadata. 00:08:49.800 |
And the problem is also there's many, many, many different LLM apps that all have different modalities. 00:08:56.340 |
So suddenly you have to design a protocol that covers all these different cases. 00:09:03.800 |
Maybe I have an application where people can like draw little images. 00:09:07.240 |
So suddenly how do I pass that as an attached file? 00:09:13.760 |
Lots of questions, a lot of boring engineering. 00:09:17.120 |
We've done that in the 90s and the 2000s with ontologies and semantic web and graphs and triplets and XML schemas. 00:09:28.820 |
But one very easy thing we could also do is, you know, before you call the tool, why not give the user a little UI for what they want to attach, how they want to attach it, and maybe for editing the arguments? 00:09:45.880 |
It's very, very easy: if I have the LLM write a whole tool call, and it opens up and says, well, I'm searching for the company OpenAI, and it's misspelled, 00:09:57.000 |
then the user can go in and say: no, this is OpenAI, it's spelled differently. 00:10:01.760 |
Or maybe it's like suddenly it's searching for Oracle for some reason. 00:10:04.480 |
You can say like, no, no, no, this is not the tool call that I want. 00:10:07.140 |
Currently, the only interaction we have is "allow", which is boring, which is square. 00:10:14.040 |
The next thing we can do is like, well, let the user edit the tool result before we paste all of this stuff back, right? 00:10:19.640 |
Like if I get 30 pages of results as a user, it's actually faster for me to just edit down these results. 00:10:26.340 |
And maybe the UI is even nicer for doing it, instead of waiting 10 minutes and spending 50 cents to have the LLM do it. 00:10:32.820 |
So to show you an example of what that can look like is you can say, I want to find all the customers with overdue invoices, right? 00:10:48.600 |
So it's able to write SQL queries and it's going to have this query CRM thing with a filter. 00:10:54.100 |
And before it gets called, we have this little approve/reject, which we know, and a little edit dialog, which here is a very raw way of editing it. 00:11:04.760 |
But it allows you to edit the arguments before they go further. 00:11:12.920 |
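A minimal sketch of that approve / reject / edit step, using terminal prompts to stand in for a real UI: the user can rewrite the arguments before the tool runs and trim the result before it goes back into the context.

```typescript
// Terminal prompts stand in for a real UI: review the arguments before the tool
// runs, then trim the result before it goes back into the context.
import * as readline from "node:readline/promises";

async function reviewJson(label: string, value: unknown): Promise<unknown> {
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(
    `${label}:\n${JSON.stringify(value, null, 2)}\n(enter to accept, or paste edited JSON) > `,
  );
  rl.close();
  return answer.trim() === "" ? value : JSON.parse(answer);
}

async function callWithReview(
  name: string,
  args: Record<string, unknown>,
  run: (a: Record<string, unknown>) => Promise<unknown>,
): Promise<unknown> {
  const approvedArgs = (await reviewJson(`Arguments for ${name}`, args)) as Record<string, unknown>;
  const rawResult = await run(approvedArgs);
  // Cutting 30 pages of results down by hand is often faster and cheaper than
  // having the LLM summarize them.
  return reviewJson(`Result of ${name}`, rawResult);
}
```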
So suddenly when the call comes back, you can actually edit the results as well and tune your query or like realize, oh, it's the wrong call. 00:11:27.500 |
I don't need the LLM to do that kind of stuff, right? 00:11:29.820 |
And that matters to me, especially when I'm doing a lot of, you know, database queries, those kinds of things. 00:11:40.060 |
If I do the wrong query, and my only option is "allow", and I get back 10,000 things, I'm like, damn, I messed up. 00:11:47.840 |
And if it's an agent thing, I have to rewind a whole agent run instead of being able to edit this one tool. 00:11:53.460 |
I've got a whole set of thoughts around context editing, but this goes beyond the scope of this talk. 00:12:01.620 |
And you're welcome to come talk to me to get more info. 00:12:14.080 |
LLMs have been trained on every word under the sun. 00:12:21.880 |
And those are even boring things you can do, if you look at everything the AI red teams are doing. 00:12:27.340 |
If you start talking to it like a terminal, it's a terminal. 00:12:30.380 |
Like, basically, whatever you tell the LLM it is, that's what the LLM is going to pretend to be. 00:12:35.260 |
So this gives us, like, a lot of leverage because they've been trained so hard and really reinforced to learn about code, right? 00:12:47.920 |
You can call, like, 15,000 APIs that have been, like, recently built. 00:12:53.000 |
Like, Sonnet 4 is amazing at knowing stuff that was just, like, built two months ago. 00:12:57.480 |
It one-shots things that I never thought would be possible before. 00:13:01.360 |
So why are we stuck with, like, tools that don't even work that well, right? 00:13:10.140 |
Why are they so bad at function calling when, at the same time, they can generate code that is so much better? 00:13:22.080 |
And many tool calls are already code: even if I have just a SQL tool, I'm basically giving it code, because a SQL query is code. 00:13:28.660 |
If I have an edit file tool, I can give it code. 00:13:32.160 |
And you can write code that calls tools, that calls code, that runs an agent. 00:13:36.580 |
You can do all this, like, infinite recursion stuff at the inference time. 00:13:41.060 |
And basically, you can tell an LLM, please create the tool that I want, right? 00:13:49.940 |
If I give it a database schema, I can say, like, well, please create the tool, get company contact. 00:13:57.600 |
I can probably do it with, like, a 3-billion-parameter model locally. 00:14:03.900 |
And so there are these kind of, like, magical genies that can just, like, create whatever you want at the moment you need it in the way you want it and modify it, right? 00:14:12.680 |
So why don't we leverage that instead of being stuck with this, like, you can only call functions. 00:14:17.960 |
You can only call functions with this schema that we've given you, and it's, like, static, and you can't even modify it. 00:14:23.620 |
So the only prompt engineering you actually kind of need to do agents is, like, write the code to do X, right? 00:14:31.820 |
I haven't linked them, but there's the Voyager paper from two years ago already. 00:14:37.480 |
And there's the, I think, code elicits better tool actions, something like that, which is a very short paper that basically says, like, you know, just, like, write code to do tools. 00:14:48.200 |
And I've been on the LLM stuff pretty early to do code, like, once the instruct versions came out. 00:14:54.340 |
And writing my little tools was the first thing I did. 00:14:59.700 |
I was like, I want a shell script to do X, Y, Z, and it would write the shell script to do X, Y, Z. 00:15:04.700 |
I would run it, I would paste the result back into ChatGPT, which is basically, like, I'm the tool caller at that point. 00:15:14.620 |
So the whole time, before I read these papers, before MCP came out, before tool calling came out, I was generating these shell scripts, or generating little applications to do these kinds of queries, generating SQL queries and pasting the results back. 00:15:27.440 |
And so it was already crappy back then, but it hasn't really gotten significantly better, right? 00:15:32.620 |
Like, it feels that tool calling is still stuck at this, like, GPT-3.5 kind of intelligence. 00:15:38.000 |
I don't know why, but they're not that great. 00:15:41.580 |
However, they're so good at writing code that the only MCP I think I need is, like, eval, right? 00:15:48.720 |
Instead of copy-pasting things, putting it in the shell, copy-pasting it back, all I need is, like, eval around it. 00:15:57.020 |
Every coding agent does most of its work with actually bash calls. 00:16:00.380 |
They call grep, they call find, they call ls, they call sed when they're, like, struggling with their edit file. 00:16:05.780 |
Or, right, it's, like, just editing a file, they actually don't know how to do it. 00:16:09.360 |
And when they really fail at calling their tool, they're just like, ah, fuck it, I'm going to write sed code. 00:16:17.780 |
So you can realize, like, why do we even need MCPs when we just have eval? 00:16:24.960 |
And so here's an example where eval is actually SQL, right? 00:16:28.220 |
And I just ask it: how many orders did customer John Smith place last month? 00:16:32.420 |
And I don't even tell it what the database is. 00:16:35.140 |
I just say, like, you have a SQL evaluation tool. 00:16:37.600 |
So the first thing it's going to do is going to be, like, well, what tables do I have, right? 00:16:42.680 |
And it says, like, oh, well, I found customers and orders, so, you know, let me look at their schema. 00:16:47.100 |
And it says, like, oh, well, customers, you know, they have, like, a field called name. 00:16:51.040 |
And then there's, like, a field customer ID in the table orders with a date. 00:16:55.660 |
Like, yeah, I know how to do a SQL query to do that, right? 00:17:02.240 |
It looks like it's just schema of customers and orders and says, like, oh, look, there's customer ID. 00:17:06.640 |
And now it can just write a SQL query that does the result of the orders, the join that it needs with the aggregation that it needs and just returns the result. 00:17:16.140 |
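Roughly the sequence of statements the model could send through a bare SQL-eval tool in this example; the table and column names are invented, and the SQLite introspection queries are one plausible way to do the exploration.

```typescript
// Plausible SQLite statements the model could send through the eval tool, in order.
// Table and column names are invented for the example.
const explorationSteps: string[] = [
  // 1. What tables exist?
  `SELECT name FROM sqlite_master WHERE type = 'table';`,
  // 2. What do the interesting ones look like?
  `PRAGMA table_info(customers);`,
  `PRAGMA table_info(orders);`,
  // 3. The actual join + aggregation, returning a single number.
  `SELECT COUNT(*) AS order_count
     FROM orders o
     JOIN customers c ON c.id = o.customer_id
    WHERE c.name = 'John Smith'
      AND o.created_at >= date('now', 'start of month', '-1 month')
      AND o.created_at <  date('now', 'start of month');`,
];
```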
And the crazy thing here is: if instead you had a fixed tool called, like, get orders for a customer placed last month, maybe the LLM won't realize that it actually has to pass, you know, first name and last name, for example. 00:17:29.900 |
So the first tool call fails, and then it, like, repeats its tool call and says, like, well, oh, the date format is wrong. 00:17:37.260 |
So it tries with minus 30 days, and then suddenly it gets, like, a huge table with all the invoice items. 00:17:42.680 |
And it's like, okay, cool, now I have the information. 00:17:44.420 |
I'm going to aggregate as an LLM, get the wrong number because they can't do addition. 00:17:49.200 |
And you wasted 5,000 tokens, and you get, like, a wrong response. 00:17:53.660 |
While this actually probably takes, like, 500 tokens, and you get a deterministic, repeatable kind of query that you can reuse, right? 00:18:02.460 |
So this is why eval is such a nice tool, is that, oops, why did this go so fast? 00:18:09.560 |
But if we take it to the next step, it's like, once it works, you know, why not store this query? 00:18:14.360 |
Why not say, like, oh, now we have, like, a get customer order amount query. 00:18:20.640 |
And so the way you can do that, for example, with SQL, is that you can just create a view to do it. 00:18:25.760 |
And then suddenly you don't even need to look at the tables. 00:18:29.920 |
You just do, like, oh, selects amount from view. 00:18:32.640 |
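Continuing the sketch with the same invented schema: the working query gets frozen into a view, and follow-up questions become one short SELECT.

```typescript
// Same invented schema: freeze the working query as a view, then reuse it cheaply.
const createView = `
  CREATE VIEW IF NOT EXISTS customer_order_totals AS
  SELECT c.id          AS customer_id,
         c.name        AS customer_name,
         COUNT(o.id)   AS order_count,
         SUM(o.amount) AS total_amount
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
   GROUP BY c.id, c.name;
`;

// Later calls no longer need to look at the base tables at all.
const reuse = `SELECT total_amount FROM customer_order_totals WHERE customer_name = 'John Smith';`;
```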
And so this is what it looks like, is this thing is going to run. 00:18:39.720 |
Like, maybe it even needs to do things in sequence where it's, like, going to be, like, oh, I'm going to select the orders. 00:18:43.840 |
And I see, like, oh, okay, I have to join this table. 00:18:47.680 |
And then I have to do this as this complicated code. 00:18:50.580 |
But then being good coders, it knows how to turn, like, 15 queries into a single view. 00:18:56.580 |
And at that point, well, this is not the example of the view that I was trying to show. 00:19:09.800 |
But that shows you that you can easily create tools and functions and views and whatever. 00:19:17.720 |
And so when you create a lot of functions and views and make them nice to reuse, that's called a library. 00:19:23.980 |
So instead of, like, exposing a GitHub tool, you just say, like, well, your eval now has access to the GitHub library. 00:19:31.540 |
You don't need to deal with, like, OAuth tokens and whatever. 00:19:33.760 |
You just have, like, this whole API to do interactions with GitHub. 00:19:40.500 |
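A sketch of the "library instead of tools" idea: the sandbox gets an already-authenticated client object, and the model writes ordinary code against it. The GitHubClient shape here is hypothetical, not a real SDK.

```typescript
// Hypothetical client shape, not a real SDK: the sandbox is handed an
// already-authenticated object and the model writes ordinary code against it.
interface GitHubClient {
  listIssues(repo: string, opts?: { state?: "open" | "closed" }): Promise<{ number: number; title: string }[]>;
  createIssue(repo: string, title: string, body: string): Promise<{ number: number }>;
}

// Code the model might write: loops, filtering, composition across repos,
// none of which fits comfortably into a single static tool schema.
async function openIssueReport(gh: GitHubClient, repos: string[]): Promise<string[]> {
  const lines: string[] = [];
  for (const repo of repos) {
    const issues = await gh.listIssues(repo, { state: "open" });
    lines.push(`${repo}: ${issues.length} open issues`);
  }
  return lines;
}
```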
And funnily enough, if you put 10,000 functions into an API, the LLM actually knows how to use them. 00:19:48.200 |
Which, I don't know why they get so bad once you add tool calls. 00:19:58.460 |
Like, all these newer models have been trained to be a little bit better at tool calling. 00:20:01.540 |
But you still, like, very, very quickly run into, like, weird things where it doesn't understand the parameters. 00:20:06.900 |
However, they're so good at writing code these days that I rarely have to fix anything in code, even for my own libraries, right? 00:20:16.620 |
Like, I just point them at my set of functions in a header file, and then it works. 00:20:23.320 |
So, instead of doing the CRM MCP, you know, just do import * from crm, and then you're done. 00:20:30.040 |
You can, not only do you have all the tools that you used to have as an MCP, but now you can create your own tools that are really rich. 00:20:36.820 |
So, you know, just build the GetCompanyInfo tool. 00:20:42.280 |
Because you don't need to build an MCP GetCompanyInfo tool, because you can actually just generate the code to call CRMListInfo. 00:20:53.000 |
And then you have, like, you know, it's able to put a for loop around it. 00:21:00.580 |
And so, if you think a step further, is that these tools and the code that's generated is, like, not just for the LLM, but, like, a lot of it is for us as well. 00:21:12.740 |
So, why don't we use the fact that we can use code now to build tools that are much richer than just a function call with, like, a little JSON window that you have to click in to edit it, if you can even edit it, but instead have tools that build UIs for us, right? 00:21:27.580 |
So, instead of just incorporating a JavaScript interpreter that has, like, under the hood access to libraries, but then still on the surface just calls, like, functions, why not have something in the LLM host application that allows us to do UIs very easily? 00:21:44.420 |
And so, I've built a couple of prototypes around it. 00:21:47.240 |
But just to show you what this would look like, if we go back to editing, you know, the tool input and the tool output, if you give the LLM the opportunity to say, like, well, if the user wants to edit my input, you know, give it, like, a good UI instead of a little text window where you can edit JSON. 00:22:02.440 |
And so, this could, if we go back to the previous example, it could, like, output some kind of UI DSL that's rendered by the LLM host. 00:22:10.300 |
And suddenly, instead of having to edit, like, JSON fields, you get a slider, you get, like, dropdowns, you get all kinds of things, which the user can validate or tweak or, you know, attach a file, say, like, no, I don't want it to know about my memories, and then call the tool. 00:22:24.940 |
And maybe there's, like, preferences you can save, like, there's all kinds of things you can do around it, right? 00:22:30.160 |
And then, similarly, you can have a UI that allows you to edit the output with, like, maybe a scroll view and a filter, and you can say, like, well, remove this, add this, the LLM should know about this, recall the tool by modifying my previous inputs, and you get this, like, rich UI to do your work. 00:22:45.140 |
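One way such a UI DSL could look; this is purely hypothetical and not part of MCP today. The tool returns widget descriptions, the host renders them, and the edited values become the actual tool arguments.

```typescript
// Purely hypothetical UI DSL: the tool returns widget descriptions instead of a
// raw JSON textarea, the host renders them, and the edited values become the
// actual tool arguments.
type Widget =
  | { kind: "slider"; id: string; label: string; min: number; max: number; value: number }
  | { kind: "dropdown"; id: string; label: string; options: string[]; value: string }
  | { kind: "toggle"; id: string; label: string; value: boolean };

const editOverdueInvoiceQuery: Widget[] = [
  { kind: "dropdown", id: "status", label: "Invoice status", options: ["overdue", "paid", "all"], value: "overdue" },
  { kind: "slider", id: "minDaysOverdue", label: "Minimum days overdue", min: 0, max: 180, value: 30 },
  { kind: "toggle", id: "includeMemories", label: "Let the tool see my memories", value: false },
];

// The host collects { status, minDaysOverdue, includeMemories } from the rendered
// widgets and only then dispatches (or re-dispatches) the tool call.
```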
And so, what I built, and I can show you that live, is, like, a very simple MCP, which has a sandbox JS, it's written in Go, and it has two libraries. 00:22:56.760 |
It has SQLite library, which is loaded, and it has a web server that basically has a single function that's called register handler, and then you just write JavaScript for the handler, right? 00:23:06.840 |
So, there's a single call, it's called eval, and then the LLM, when it calls it, can also register REST handlers. 00:23:15.920 |
So, what this looks like, right, is there's execute JS, and if you want to load a file that you've already written, if you're in, say, Cursor, you can use execute JS file. 00:23:23.600 |
What this means, if I use the same query, and, right, it's a prototype, so I'm a little bit aggressive on the prompting, it will suddenly write JavaScript, where it's going to be, like, you know, let me look at the table. 00:23:34.540 |
So, exactly the stuff from before, but it's already clever, because it's Sonnet 4, so it will actually look for the company table in code already. 00:23:47.360 |
Like, it won't stream back all the tables in my database. 00:23:52.000 |
So it saves on tokens and whatever: instead of a call that returns all these tables and their schemas, you save two tool calls, right, and you save a lot of tokens, just by virtue of having eval. 00:24:05.820 |
It has, like, DB SQLite query, but suddenly we're already, like, saving money. 00:24:11.820 |
And then, in the same call, actually, I forgot about it, in the same call, it actually already does the querying of it. 00:24:21.300 |
And it's well possible that, you know, you already get, like, some stuff at the beginning. 00:24:26.100 |
Suddenly, you get, like, the first 10 companies, and then it calls it again with all the companies, and then the LLM is able to show the result, right? 00:24:37.380 |
It, like, actually doesn't even take care of printing it as JSON, apparently. 00:24:40.780 |
It just, like, literally logs it out with the standard, like, Go syntax. 00:24:45.800 |
And then you get the query response, and this took two seconds, right, or, like, three seconds. 00:24:58.920 |
And so, what I can do now is, like, just save it as a global function. 00:25:03.060 |
It being Sonnet, and being kind of on crack cocaine, and just, like, deciding to do 15 things, it, like, generated 15 functions, sure. 00:25:12.840 |
I've got companies info, I can get a company by its ID, I can search companies, I can do all kinds of things by the virtue of just, like, two tool calls. 00:25:20.460 |
And one of them was just because of a syntax error, because my prompting's bad, because it's a prototype. 00:25:25.200 |
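Roughly the kind of JavaScript the model sends through the execute-JS tool in a sandbox like this; db.query() and the use of globalThis to persist functions are my approximation of the prototype's API, not its exact names.

```typescript
// The db.query() helper and the use of globalThis to persist functions are my
// approximation of the prototype's sandbox API, not its exact names.
declare const db: { query(sql: string, params?: unknown[]): Record<string, unknown>[] };

// Look for the relevant table in code instead of streaming every table back.
const companyTables = db.query(
  `SELECT name FROM sqlite_master WHERE type = 'table' AND name LIKE '%compan%';`,
);

// Answer the question in the same eval call...
const companies = db.query(`SELECT id, name, email FROM companies ORDER BY name LIMIT 10;`);
console.log(companyTables, companies);

// ...and keep reusable functions around, since the sandbox persists between calls.
(globalThis as any).getCompanyByName = (name: string) =>
  db.query(`SELECT * FROM companies WHERE name LIKE ?;`, [`%${name}%`]);
(globalThis as any).searchCompanies = (q: string) =>
  db.query(`SELECT id, name FROM companies WHERE name LIKE ? OR email LIKE ?;`, [`%${q}%`, `%${q}%`]);
```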
So, why not create a REST API endpoint, right? 00:25:29.780 |
Because it has register handler, so why not hook up all of these tools to, all of these functions to a REST API, which, like, all right, here you go, right? 00:25:39.540 |
Like, just, like, not really hard, it just, like, calls the thing. 00:25:42.360 |
Because it's Sonnet, again, it, like, generated, like, even more. 00:25:46.440 |
And then I asked it to generate, like, a website, right? 00:25:51.620 |
So, it's registering a handler for the JavaScript, it's registering a handler for the CSS, and it's registering a handler for the HTML. 00:25:57.460 |
It already has the REST endpoints, and boom, now I have a whole CRM. 00:26:03.440 |
I can just start working with it, and that was a single tool, eval, right? 00:26:10.500 |
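And a sketch of that last step, again with approximated helper names (registerHandler, db.query): REST endpoints over the saved functions plus one handler serving a small HTML page, all from the same eval tool.

```typescript
// registerHandler() and db.query() are approximations of the prototype's API.
declare const db: { query(sql: string, params?: unknown[]): Record<string, unknown>[] };
declare function registerHandler(
  method: string,
  path: string,
  fn: (req: { query: Record<string, string>; body?: unknown }) => unknown,
): void;

// JSON endpoints over the functions created earlier.
registerHandler("GET", "/api/companies", () => db.query(`SELECT id, name, email FROM companies;`));
registerHandler("POST", "/api/companies", (req) => {
  const { name, email } = req.body as { name: string; email: string };
  db.query(`INSERT INTO companies (name, email) VALUES (?, ?);`, [name, email]);
  return { ok: true };
});

// A minimal page that fetches and renders the list.
registerHandler("GET", "/", () => `<!doctype html>
<html><body>
  <h1>Companies</h1>
  <ul id="list"></ul>
  <script>
    fetch('/api/companies').then(r => r.json()).then(rows => {
      document.getElementById('list').innerHTML =
        rows.map(r => '<li>' + r.name + ' - ' + r.email + '</li>').join('');
    });
  </script>
</body></html>`);
```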
So, I think we're leaving so much on the table by focusing on tool calling and saying, like, oh, there's, like, agents, there are these little widgets with these little creatures with tools, and that's our mental model. 00:26:20.760 |
Instead of being, like, no, this is, like, a magical genie that can create anything I want, when I want it, without even needing any big information, because it's all in training corpus. 00:26:30.960 |
You can edit the companies, you can add new ones, which will be stored in the database. 00:26:38.880 |
This is just... I mean, I don't need Cursor, I don't need anything. 00:26:48.080 |
So, to close this off: LLMs are absolute magic, and I think you should get used to thinking recursively, right? 00:26:55.200 |
It's like, if you ask the LLM to do something, ask it to do the code to do something, and then once you have something that writes the code to do a certain task, ask it to write the code to write the code, right? 00:27:04.680 |
Which is kind of what I did with the JS sandbox. 00:27:07.080 |
I didn't just give it, like, JavaScript with loaded libraries; instead, suddenly I have a JavaScript sandbox that you can use to create libraries that can then be loaded later on. 00:27:16.360 |
And those are APIs that I can then reuse in, like, different systems, and it's, like, all very circular. 00:27:32.500 |
And then I can create words that create an LLM that creates words that create words. 00:27:37.740 |
And all of these words ultimately are going to make things happen in the real world. 00:27:44.220 |
But if you focus on just the thing that you need to make happen in the real world, you tend to forget that it's not just tool calling. 00:27:58.600 |
So just write the code to solve the problem, as we have been used to doing, instead of saying: we have an agent, and suddenly it does everything. 00:28:05.180 |
It's not that; it's the LLM writing code that does everything. 00:28:10.820 |
So, yeah, Infinite Loops of Creation, I hope you enjoyed this talk. 00:28:15.360 |
And I hope that you are able to bring back the magic into LLMs, right? 00:28:20.720 |
Like, the sparkle, sparkle, sparkle, because there's so much more to them than what we're trying to do with them, than what we're thinking of them these days.