Bitter Layout, or, the alternative name for this talk: How I Learned to Love the Model Picker. And hopefully you will too. The idea for this talk started when I was perusing all the AI-first apps I use all the time and realizing how similar they're all starting to look.
There's a very consistent layout between them all. And it's not just the chatbots and the answer engines; it's also the creative tools, coding assistants like v0, and even Canva. They're all starting to use this very similar layout: an input field, a turn-by-turn UX, and a dropdown with just way too many models to pick from.
It all feels like they're retrofitting stuff into this chatbot UX. But don't worry, this talk is not about whether chat is the future or not. I think we've all heard that enough times; at least I have. And swyx did a good job of humbling all designers with this tweet, where he basically called out all the design thought leaders who were saying chat is dead.
Or sorry, that chat's not the future. Then they show off their cool demo, and shortly after, we all just go back to using ChatGPT. So, fair point. I think right now we're in a bit of a dualistic future of AI UX. I've called this first section Schrödinger's Chat, because all designers know how many usability issues chatbots have, yet we all still use them every day.
So it's kind of like, obviously they are the future, but at the same time, obviously they shouldn't be. I won't go into my thoughts on whether they should be, but I'll do some anthropology here, just to get everybody up to speed in case you're not familiar with the great chatbot debate.
I'm not sure people realize how long we've been debating this, but I can track it all the way back to 2022 with this post from Linus Lee. It's a great post, by the way, and it still holds up today, but he essentially says he doesn't think exposing the raw text completion is really the right paradigm long term.
Note the date, May 2022, because a couple months after that, we have ChatGPT essentially saying, hold my beer. If that's not the right UI paradigm, it certainly didn't bother them. And I think a lot of other designers have taken note of the escape velocity of that.
But still, the next year, May 2023, we saw some other great posts from people like Amelia Wattenberger, and then the next month, Maggie Appleton, who were making great arguments about why chat's not the future. These, I think, hold up pretty well, but at the same time you obviously have other designers arguing how intuitive chat is.
And then as ChatGPT hits escape velocity, we're seeing everybody start to meme the defense of chat, which is basically: just look at the chart. Obviously, it's working, right? And I think there's something interesting about that. If you can come to a debate with a meme, it means there's something a bit intuitive about your argument.
So, fair enough. But still, even in March of this year, we've had people like Julian Lehr writing very good arguments for why chat should not be the future. He shows clock speed relative to different interface paradigms, which is very convincing. He pretty much says chat is a bottleneck, but at the same time, we'll all probably still go back to using chat after this.
So I'll segue into the next section, which is called Models and Modes. This is about the model picker, the other UI paradigm that's been developing alongside the chatbot's popularity. I'm sure everybody knows what it is, but to be clear, it's that dropdown where you have to select from a million different models.
I made this section in memory of Larry Tesler. If you're not familiar with him, he's kind of a big deal; he invented copy and paste, among other things. Another thing he was famous for was apparently saying, "Don't mode me in." I'm not sure he actually said it, but the quote is attributed to him, and he definitely hated modes.
If you're not familiar with modes, a mode is a setting in a UI where, once you flip it, your inputs suddenly map to drastically different outputs. Larry Tesler hated this; he thought it was unintuitive and wanted everything to be modeless. To give a concrete example of a mode: caps lock. You hit caps lock, and now your keyboard performs differently.
And for a more recent mode, and I'm not sure how many people would agree with me on this, I think the model picker is a bit of a mode selector as well.
Obviously we're dealing with stochastic output in generative AI applications, so it's not a distinct change in settings, but once you flip to a different model, you get a whole step change in output. To me, that's kind of like a mode selector, and here's a quick video to illustrate the point.
This is a bit old, an older version of ChatGPT, but you can see I'm trying to use certain modes and the model is not supporting them. So I have to go through this menu and find which model allows me to use this mode. The argument here is that we're putting modes on top of modes: you now have to match models to modes, and it's not super intuitive.
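To make that models-to-modes matching concrete, here's a minimal TypeScript sketch. The model names and capability flags are made up for illustration; the point is that this lookup exists somewhere, and today the user is the one running it in their head:

```typescript
// Hypothetical capability map: which modes each model supports.
type Mode = "image-input" | "web-search" | "file-upload";

interface ModelInfo {
  name: string;
  modes: Set<Mode>;
}

const models: ModelInfo[] = [
  { name: "model-a", modes: new Set<Mode>(["image-input", "file-upload"]) },
  { name: "model-b", modes: new Set<Mode>(["web-search"]) },
];

// The lookup the user currently performs by trial and error in the picker.
function modelsSupporting(mode: Mode): ModelInfo[] {
  return models.filter((m) => m.modes.has(mode));
}

console.log(modelsSupporting("web-search").map((m) => m.name)); // ["model-b"]
```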
They've actually done a great job of redesigning this lately, so this is certainly not throwing shade at OpenAI. I think they have a great design team; I'm just illustrating a moment in time when this was super frustrating to me. You kind of just want to talk to the model and say, here's my use case, what mode and what model should I use?
But then maybe the model would just pick itself, so that won't work. This really illustrates the flexibility-usability trade-off. It's a design principle where you're constantly trying to decide how well you understand user needs. If you can pinpoint them, you can create a very usable, optimized UX.
But as you try to make a more flexible system, the usability tends to decrease, because you're increasing the number of edge cases, the complexity, and the requirements. And I think this is a trade-off that doesn't get talked about enough in the "is chat the future" debate.
We generally talk about it in absolutes, but it's really less of a yes or no and more a question of timeframe and which trade-offs we're talking about. So I'd like to segue into the next section, which pushes the idea that when we design interfaces, we really need to consider the zeitgeist we're working in.
So, what are the trade-offs of the time? What constraints are we working with? What timeframe could an interface be good for? The subtitle of this one is "the context of all in which we live and all that came before us." Easter egg for anybody who remembers that.
To get into this section, I'd like to lean on a theory from The Innovator's Solution, the follow-up to The Innovator's Dilemma. In this book, they give you some guidelines on how to approach building products with this idea of product architecture.
The architecture is the idea that you have a system and you're trying to figure out how the components in the system interact with, or interface with, each other. And when you start to understand how they interact, you can see different attributes.
They map these out to two distinct sides of a spectrum: integrated architectures and modular architectures. Integrated is more common in early-stage disruption, with proprietary technologies. They're very optimized and interdependent, and they allow vertical scaling. Then, to contrast that, you have modular.
This is generally when technologies start to commoditize. The components can be more independent, which allows horizontal scaling. But the key point in their theory is that you're never really on one side of the spectrum or the other; you're bouncing between the two, and the industry has different parts of the tech stack commoditizing and de-commoditizing all at once.
So as a designer, or anybody else building, you pretty much have to figure out where the strategic points are to be integrated versus modular. They use IBM as an example: when IBM started out making mainframe computers, it was very integrated; then it shifted to personal computers and became more modular.
And then, of course, the whole computer itself ended up commoditizing at some point, and they got out of that business. So thinking through which parts of the AI industry are commoditizing and de-commoditizing today can help us think about how to design interfaces. And the main question everybody asks when you pose this prompt is: are the models themselves commoditizing?
Because this is the big topic everybody debates. And it brings us to the Bitter Lesson from Rich Sutton. If you haven't read it, I definitely recommend doing so; it's not wise to build in AI today without knowing this lesson. The TL;DR for this talk is that we shouldn't assume that computation is constant as long as we're seeing scaling laws in effect.
As long as the next model is still important, which you can see it is: every time a new model comes out, everybody says drop everything and let's check it out. As long as the scaling laws are still in effect, we can assume the models themselves are not commoditized.
So the Bitter Lesson leads us to what I'll call the bitter design lesson, why not? The idea is that if the basis of competition is inference performance, then the UI must be primarily focused on conforming to the next model. Said plainly: if the model's not commoditized, it's actually the interface that's the commodity.
When that changes, when models overshoot user needs and you don't need a rocket scientist of a model for whatever you're doing in ChatGPT, then we can start to explore different integrations within the interface. But until then, the primary job of every interface really has to be figuring out how to conform to the next model's capabilities.
So it's kind of a bitter design lesson, because then you get layouts like this: the bitter layout. Pretty uninspiring, not super usable, just not very cool at all. But the one thing it does really well is absorb the next model's capabilities. As soon as the next model comes out, jam it into the bitter layout, update your model picker, and your app is more intelligent.
So I hate this design, but as a designer you can't really hate on the ROI: you add one line item, and now your app is N× more intelligent. Kind of hard to argue with that as a design decision. So, that's the bitter design lesson.
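That one-line-item ROI is almost literal. Here's a sketch of what I mean, with hypothetical model names; the layout never changes, it just renders whatever's in the registry:

```typescript
// A model registry feeding the picker. Absorbing the next model
// is a single new entry; nothing else in the layout changes.
interface ModelEntry {
  id: string;
  label: string;
}

const MODEL_PICKER: ModelEntry[] = [
  { id: "model-v1", label: "Model v1" },
  { id: "model-v2", label: "Model v2" },
  { id: "model-v3", label: "Model v3 (new!)" }, // the entire "redesign"
];

// The bitter layout just renders the list, whatever it contains.
function renderPicker(entries: ModelEntry[]): string {
  return entries
    .map((e) => `<option value="${e.id}">${e.label}</option>`)
    .join("\n");
}

console.log(renderPicker(MODEL_PICKER));
```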
The takeaway, I think, is that I'm not ready to eat my words about saying chat is not the future. But it's quite clear that this one attribute of chat, that it conforms to the next model very easily, is one of the key features we need to keep in mind.
So until models do commoditize, the future of AI UX must conform to the next model. That's the bitter design lesson. But how do we go from bitter to sweet? What comes next?
As with most things in life, Bret Victor has already given a really good talk on this, so you should just watch that talk; it's much better than this one. It's called The Future of Programming, and he explains the kinds of mistakes we've made over the past decades in thinking about programming.
Specifically, for this context, he uses the example of how hard people found it to go from binary to SOAP, an early assembler. The binary programmers could not accept that you could give up control to the machine and use these abstraction layers efficiently; they liked to hand-code everything.
And of course, making this mind shift was really important, as we see in retrospect. So his lesson was: stop thinking in terms of procedures and start thinking of programming in terms of goals and constraints, pretty much guiding programs with these higher layers of abstraction.
Designers, I think, are actually pretty well suited to do this. It'll be a mindset shift to jump from thinking procedurally to thinking in goals and constraints. We like to be very detailed in how we design user flows and consider all the edge cases.
And that's very important. But we're likely going to have to move up a layer of abstraction as apps become more stochastic, dynamic, and probabilistic. We can't envision every possible path anymore, so we have to think more in terms of what constraints and goals we can set to keep people on the happy path.
Design systems are already things we use to guide developers and designers toward goals and to set constraints for them. So what happens when we start using them for generative UI? If we start to collaborate with a model, do we have a design system that keeps it within the right constraints?
Quality assurance: is that like reinforcement learning? I'm not really sure, but maybe when you critique a design that a model has created, that becomes a reinforcement learning loop. And user stories: as designers, we're very good at envisioning what the user is trying to do via user stories.
These are kind of like system prompts, in a way. Can the model also become a partner in helping set the goal for a user? We could translate some of these user stories into the system prompts themselves. This is pretty speculative, but I think it's a nice prompt for the future of UI design in the AI age, as in the sketch below.
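To show what I mean, here's one speculative way it could look: the design system's constraints and a user story compiled into a system prompt for a generative UI model. Every name and value here is hypothetical, just illustrating the shape of the idea:

```typescript
// Hypothetical design-system constraints for a generative UI model.
const designSystem = {
  allowedComponents: ["Button", "Card", "TextField", "Stack"],
  tokens: { spacing: [4, 8, 16], radius: 8 },
};

// A user story standing in as the goal.
const userStory =
  "As a returning user, I want to resume my last session in one tap.";

// Compile constraints + goal into a system prompt.
function buildSystemPrompt(): string {
  return [
    "You are generating UI for our product.",
    `Only use these components: ${designSystem.allowedComponents.join(", ")}.`,
    `Spacing scale: ${designSystem.tokens.spacing.join("/")}px; corner radius: ${designSystem.tokens.radius}px.`,
    `Goal (from the user story): ${userStory}`,
  ].join("\n");
}

console.log(buildSystemPrompt());
```

The design choice here is the same one the talk is arguing for: the designer specifies goals and constraints, and the model fills in the procedural details.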
I'd like to end on this quote from Dario Amodei, who says he feels that generative AI systems are more grown than they are built. I like this as inspiration for helping us shift mindsets and think about how the future of UX might be less like a process of construction and perhaps more like a process of gardening.
So if we can embrace these types of lessons and start to think about design in a new way, at a higher level, perhaps then we can move beyond the bitter layout. Thanks a lot.