Since all the questions already got asked (who built an MCP server and it didn't work?), we're here to commiserate and talk about how to actually build with the full spec. What are the hidden capabilities? Why do they matter, and how do they light up? I work on VS Code, so this is a biased, local-MCP-for-development track, but all of it is applicable everywhere.
I really love the intro to this track. MCP is moving at high velocity: a lot of ecosystem growth, excitement, people working together and collaborating. But there's so much more work to do once you realize how early this ecosystem is. Most criticism of the spec or the ecosystem really comes down to how early we are, and I want to point out where we can gain more powers.
And just 10 days ago, on a Friday, we actually had the first in-real-life gathering of the MCP steering committee, during the MCP Dev Summit. That's how early it is: we hadn't even met before, we had just talked on Discord. So we finally met in person for the first time to talk about how to evolve the spec and the ecosystem.
And the basics are hopefully all covered by the previous talks. This is my first MCP talk where I don't spend half of it just explaining what MCP is. There are roots on the client side, there's sampling, there are prompts and tools and resources. It's a really rich protocol for building dynamic discovery, persistent resources, and rich interactions.
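As a quick map of those primitives, here is who provides what, written as data; this is a summary sketch, not code from any SDK:

```python
# Who provides which MCP primitive. Tools, resources, and prompts are
# exposed by the server; roots and sampling are capabilities the client
# offers back to the server.
primitives = {
    "tools": "server",      # actions the model can invoke
    "resources": "server",  # context the application can read
    "prompts": "server",    # templates the user can invoke
    "roots": "client",      # workspace folders the client grants access to
    "sampling": "client",   # LLM completions the server can request
}

server_side = sorted(k for k, v in primitives.items() if v == "server")
```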
But there's a gap in how this is being implemented. There's an "MCP is just another API wrapper" syndrome happening, because people just want to ship. They want to build products, and they're actually building really excellent products with just tools. That creates a reinforcing loop: once you see MCP working that way, you reuse the same stacks and repeat the same tools-only ecosystem.
And there are technical barriers: people do this because support is missing in the clients, the SDKs, the documentation, and the reference servers. The clients reflect this most. If you look at the adoption data from the modelcontextprotocol website, you see everybody goes for tools, because that's where the most immediate success is.
And if we're honest, for most resources and prompts you can build similar flows with just tools. VS Code did the same thing. When we launched our MCP support, two weeks, no, two months ago now, we started with tools, and then added discovery and roots as we worked through actually reading the spec and implementing it.
And I'm happy to announce that with VS Code's upcoming release, v1.10 (don't quote me on the number), which is already in Insiders now, so download it, we actually have full spec support. And that's what I want to talk about here: all the other things that people are not using yet.
Yes, that's why I'm clapping. Okay, so the message is: with full MCP spec support you can unlock the rich, stateful interactions that the MCP vision outlines for how agents should work together. Starting with the most obvious: tools. I'm not going too deep here, but tools represent actions.
They're well-defined, they perform actions, and they mostly map easily to function calling if you're used to that. On the right side you see Playwright: you can start a server, it will open the browser and take a screenshot. But tools often lead to quality problems, and we all struggle with that.
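On the wire, a tool is just a name, a description, and a JSON Schema that clients hand to the model as a function-calling definition. A minimal sketch of a screenshot tool as it might appear in a `tools/list` response (the tool name and fields are illustrative, not Playwright's actual definition):

```python
import json

# Illustrative tool declaration: the client turns inputSchema into a
# function-calling definition for the model.
screenshot_tool = {
    "name": "take_screenshot",
    "description": "Capture a screenshot of the current browser page",
    "inputSchema": {
        "type": "object",
        "properties": {
            "fullPage": {
                "type": "boolean",
                "description": "Capture the full scrollable page",
            }
        },
    },
}

# Shape of the result a server returns for a tools/list request.
wire = json.dumps({"tools": [screenshot_tool]})
```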
Raise your hand if you've hit an error in your IDE because you couldn't add more tools, or the model ran the wrong tools because you have too many. There's research from LangChain that nicely underlines this, pointing out three vectors. First, too many tools: the AI gets confused by that. Second, too many domains of tools: if each tool suddenly comes with different properties and its own instructions, the model also gets confused, versus a pure "this is UI testing" set. And lastly, repetition: the more times the AI has to run tools to solve a problem, the easier it is for it to get confused as well.
So it's really quality over quantity, and clients handle that somewhat by giving you extra controls. In VS Code, we added per-chat tool selection: there's a little tool picker, and you can cut the tools down to what you actually need in the moment instead of all of them.
It has nice keyboard accessibility, it's really quick to set up, and it persists for the session. So that's one way. We also have tool mentions: sometimes you write "pull this issue" and try to talk your way into whatever tool you want invoked. Why not just say: use this tool, fill in the right parameters to use it properly, and then use that other tool?
So that's what we allow as well. And lastly, just now in Insiders, we're shipping user-defined tool sets. That's more of a reusable concept: once you know these are all the tools you need for a front-end testing flow, you put them into a tool set and say "use my front-end testing flow."
So that's coming as well. These are all user controls, but the spec actually has dynamic discovery built in. That means a server can say on the fly: actually, I'm going to give you these other tools now. And on the right, you see the GitHub MUD MCP.
It's on GitHub, you can check it out. It starts with a chat mode I created that puts the agent into a game-master prompt, with the MUD MCP installed. Now, with the mode active, I can go into the agent, switch to MUD, and play the game.
And what dynamic tool discovery does here is make the tool list aware of which room I'm in. It's a dungeon crawl: you walk from room to room, you can go east and north, you can pick up stuff. And if there's a monster, I can battle the monster. But the tool for battling shouldn't be there when there's no monster.
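A minimal sketch of that room-dependent tool list, assuming a hypothetical MUD server: the server recomputes its tools from the game state, then emits the spec's `notifications/tools/list_changed` so the client re-fetches `tools/list`. Tool names here are made up for illustration:

```python
# Hypothetical MUD server logic: which tools exist depends on the room.
def tools_for_room(room):
    tools = ["look", "go", "pick_up"]
    if room.get("monster"):
        tools.append("battle")  # only offered while there is something to fight
    return tools

# After the tool list changes, the server notifies the client, which then
# calls tools/list again to pick up the new set.
list_changed = {"jsonrpc": "2.0", "method": "notifications/tools/list_changed"}
```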
Eventually I advance through the game, and I finally find a goblin I can battle. The battle tool appears, and I can battle the goblin. So imagine this for the MCPs you want to work on; these capabilities are here to give servers and clients a little bit more. So really, tools are actions. Resources, in turn, add context. You don't want to return a giant file from your server; you want to return a reference to the file. That can be something the LLM follows up on, or something the user acts upon. The other use case is actually giving files to the users.
So if Playwright takes a screenshot, it can expose it to both the LLM and the user, and resources provide that semantic layer. And then: what are the issues? Oh, I found your issue, that kind of flow. A server wants to understand the Python environment, and maybe look at your settings to see how you set things up, so it can customize.
And that makes it more dynamic and stateful out of the box. Another one: if the server can look at the actual packages and libraries you have installed, that's a great way to customize for a React setup versus a Svelte setup, really acknowledging what the user is looking at instead of constantly asking, what framework are you working on? What framework are you working on? You're working in my folder, just look at it. And lastly, there's the question of what the CI/CD pipeline looks like. That's where MCP servers really shine, connecting the end-to-end developer experience. And you can read all of that out, too.
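On the wire, one way this looks is a tool result that carries a resource reference instead of inlining the raw bytes, so both the model and the user can act on it. A sketch with an illustrative URI scheme:

```python
# Tool result embedding a resource rather than dumping bytes into the chat.
tool_result = {
    "content": [
        {"type": "text", "text": "Screenshot captured."},
        {
            "type": "resource",
            "resource": {
                "uri": "screenshot://page-1",  # illustrative URI scheme
                "mimeType": "image/png",
                "blob": "iVBORw0KGgo=",  # truncated base64 PNG, illustrative
            },
        },
    ],
    "isError": False,
}

# The client can surface every referenced resource to the user.
resource_uris = [
    c["resource"]["uri"] for c in tool_result["content"] if c["type"] == "resource"
]
```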
Sampling. Who has heard about sampling? Who is really excited about sampling? Okay, so you understand what I mean. Sampling is one of the oddly named primitives, and if it had a better name, maybe more people would use it. But it's actually in Insiders now, and it's so much fun to use.
It allows the server to request LLM completions from the client. What I'm showing here on the right is the permission dialog that pops up to allow the server to access the LLM. Right now it's wired up by default to GPT-4.1. There are more spec improvements coming, like more structured formatting.
There are ideas out there, a lot of things to make it better. But until now nobody had implemented it, so there wasn't really a need to make it better. Now the implementations are here, so please use sampling. It's a nice progressive enhancement: maybe by default you return the kitchen sink,
and once you have sampling, you can do interesting things like summarizing resources into something more tangible. You can format a website you fetched into markdown for the LLM. You can even think about agentic server tools that run via the LLM from the client. And if you look beyond the primitives, there are a few more interesting things.
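The fetch-a-website-to-markdown idea can be sketched as the JSON-RPC request the server would send; the client routes it to its model (after that permission dialog) and returns the completion. The id and prompt text are illustrative:

```python
# Server -> client sampling request: "summarize this page as markdown".
sampling_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {
                "role": "user",
                "content": {
                    "type": "text",
                    "text": "Convert this HTML to markdown: <h1>Docs</h1>...",
                },
            }
        ],
        "maxTokens": 500,
        # Hint, not a mandate: the client picks the actual model.
        "modelPreferences": {"intelligencePriority": 0.5},
    },
}
```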
So far we have roots, tools, resources, and prompts, and with dynamic discovery you can update them at any time. The client will send new roots as the VS Code workspace changes, and the server can send new tools and prompts as things change on its side.
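Both directions reuse the same list-changed pattern; a sketch of the two notifications, with method names as in the spec:

```python
# Client -> server: the workspace changed, so the set of roots changed.
roots_changed = {"jsonrpc": "2.0", "method": "notifications/roots/list_changed"}

# Server -> client: the server now offers a different set of prompts
# (tools and resources have analogous notifications).
prompts_changed = {"jsonrpc": "2.0", "method": "notifications/prompts/list_changed"}

# In each case the receiver responds by re-listing: roots/list, prompts/list.
```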
So it's a really dynamic environment already. But there are more pain points to fix before these servers get really powerful. One is the developer experience. Who's been struggling with working on MCP servers, with debugging and logging and everything? Yeah, one or two hands up. Apparently it's really easy, so maybe it's not a problem.
Okay. So we now have a dev mode in VS Code, a little dev toggle. You can already see the console, which always works for all MCP servers, so once you hit a snag, that just works. And now it's in debugging mode; it actually has the debugger attached.
So once I run the prompt, which is dynamically generated on the server, I can hit the breakpoint and step through it. That's usually really hard, because your server isn't owned by any process that you run manually; it's owned by whatever client and host is running the MCP server.
Because VS Code is both, it can just put the server into debug mode and attach its debugger. That works for Python and Node right now, out of the box. Super exciting, and it has definitely changed how I work on MCPs. The latest spec was already called out.
I just want to call it out again, because it's so important that people stay on the tip of the spec, follow what's coming, and understand what's in draft. Things in draft only become stable because people provide feedback that they're useful and that they work. If they sit in draft and nobody provides feedback, they'll still go to stable, and then they might need revisions, like the auth spec did.
The updated auth spec on the right gives you enterprise-grade authorization. There's a talk tomorrow about building a protected MCP server that I can highly recommend, from someone who actually worked on the auth spec. So if you want to talk to one of the people behind it and dive really deep into auth, you can do that.
Streamable HTTP has also been working in VS Code for two versions now, but it's been really hard to test because there are no servers out there. If you work on hosting, you should be really excited about Streamable HTTP. Get everybody who is hosting MCP servers onto it, and stop using plain SSE.
SSE-style streaming is still possible with Streamable HTTP, so you get the benefits of both while avoiding that really stateful churn on your servers. The last one I already mentioned: there's a community registry happening. That's the other big pain point: if I build a server and nobody finds it, what is the discovery experience?
How do I send people to it? Do I send JSON blobs around for people to discover my server? There's a lot of community work around making discovery easy, so a big shout-out to everybody on the steering committee, the community working groups, and everybody involved here.
If you want to check it out, it's at modelcontextprotocol/registry on GitHub, and it's all happening out in the open. And lastly, I'm really excited about elicitations. That's coming in the next draft (spec revision, draft, release, whatever). It's a way for tools to finally reach back out to the user when they need more information.
Right now, tools are entirely driven by the LLM, and they get all their input from it. When a tool actually needs more concrete, specific input from the user, you have to throw the user into another round of chat and ask for it. Why not just give them an input field to provide it directly?
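A heavily hedged sketch of what an elicitation exchange looks like in that draft: the server asks the client to collect structured input from the user. The method name, fields, and response shape follow the draft and may change:

```python
# Draft-spec sketch: server -> client request to elicit user input.
elicitation_request = {
    "jsonrpc": "2.0",
    "id": 12,
    "method": "elicitation/create",
    "params": {
        "message": "Which environment should I deploy to?",
        # A JSON Schema describing the form the client should render.
        "requestedSchema": {
            "type": "object",
            "properties": {
                "environment": {"type": "string", "enum": ["staging", "production"]}
            },
            "required": ["environment"],
        },
    },
}

# A possible accepting response from the client, per the draft shape.
elicitation_response = {"action": "accept", "content": {"environment": "staging"}}
```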
So it's, again, more statefulness in the tools. On top of that, your help is needed. Progressive enhancement in MCP is possible, and I think we want more best practices out there, maybe even in the reference servers, to show it off. But everything is ready to be used now: there are clients supporting the latest spec that you can run against and test against.
Those clients are used by real users. And as more users showcase how great these stateful servers can be and outline these best practices, the interoperability gap will close and clients will catch up. It's a very fast-moving ecosystem; people complain, like, oh, you shipped this two weeks after the other person.
But it's all coming together, and as people use these things, learn, and bring feedback, it gets better. So make action-oriented, context-aware, semantically rich servers using the full spec. And lastly, contribute to the ecosystem. If you have the time, read up on some of the open RFCs I mentioned, like namespaces and search, to see what's coming.
Make sure they get into the SDKs you're using by following the issues, and share back your experience. A lot of people underestimate how much influence they have on clients and SDKs and everything else, just by filing issues and providing feedback. I help triage a lot of the MCP issues coming into VS Code.
We read all of them, we learn from them, and that really drives our roadmap. That probably happens with every other team out there too. So really make your voice heard: everybody should support sampling! There's a transformative potential in MCP that we can all unlock with the spec that's already there, so that the ecosystem catches up to the spec.
So with that, let's go. Feel free to hit us up at the Microsoft booth; there are two VS Code folks there, Tyler and Rob, you can talk to them, or talk to me, or talk to your friendly MCP steering committee members. Thank you.