The Accidental AI Canvas - with Steve Ruiz of tldraw


Chapters

0:00 Introductions
1:02 Steve's Background In Fine Arts and Transition into Tech
8:22 The creation of tldraw and its open source origin
15:44 The Inception and Growth of tldraw
18:40 The Integration of AI with tldraw and Make It Real Feature
21:56 Discussion on Multimodal Prompting and Iterative Design
32:32 The Concept of Parallel Prompting in Design
34:11 Impact of AI on developer jobs
37:28 Additional Layers in Multimodal Prompting
45:18 Introduction of DrawFast and Lens Projects
50:03 tldraw 2.0 and the future of the project
55:41 The Competitive Landscape of Canvas Tools and tldraw's Unique Position
1:00:22 Advice for Founders: Following Your Interests and Desires

Transcript

Welcome back to Latent Space. I'm very excited to have my good friend, Steve Ruiz, how are you this morning? Hey, how's it going? Evening for me, but yeah. Evening for you. Well, thanks for staying up. I have had the good fortune of knowing you before you got famous and actually hanging out in the precise office and studio that you're recording from right now.

Congrats on Make It Real. Congrats on tldraw. I think it's been something that's sort of years in the making, but it probably looks like an overnight success to a lot of people. Yeah, thank you. It's kind of a funny story. I don't know, where should we jump into it?

Well, I like to have people give a little background on themselves. You don't have a lot of detail on LinkedIn, just like myself. I just found out just before recording that you're also a late entrant into tech. So maybe just like, you know, what's your background coming into something like tldraw?

What makes you so unique at doing sort of creative, collaborative experiences? Like, you know, I know you and I've actually used tldraw, so I have some appreciation for how hard this thing is. Like you said, I kind of came into this a little late and kind of came into it from a weird angle.

My background is actually in like fine art and studio art. So I have my master's from University of Chicago in visual art and would write about contemporary art and put together exhibitions and do my own paintings and drawings. And that was back when I was living in Chicago. And then when I moved over to the UK, kind of, you know, got a new studio, kept that going.

But when I turned 30, I kind of decided I should probably make some money and work more closely with other people than I was at the time. Studio art is primarily a solo thing. And I always had kind of an analytical background, or kind of side to me.

My day jobs were, you know, I was working for lawyers, I was doing this writing for magazines and stuff. And so when I made that switch back eventually to design and product design, I was also able to use the tiny little bit of technical skill that I had from building WordPress websites for myself and other artists as portfolios. I could take that, plus some natural curiosity around the way that products work, and create a career direction that was more around prototyping and technical design: doing the design on the bits of a product that really couldn't be designed otherwise.

So the interactive bits, the bits where there are maybe more questions, where there's no clear answer in terms of, how should this work? You know, in all those places, you kind of have to build something in order to figure out what you want to build.

It turns out, you know, to skip right to the end for a moment, like canvas is just full of those types of problems. So it's no surprise that I ended up there. It's like kind of an extreme form of the same problem. But yeah, so I was working. This was back in like 2017, 2018.

And I used at the time a product called Framer. That was back when it was more of a code product than what it is now, which is more of a visual builder that is kind of backed by code. And so I sort of just drilled into that.

It was cool. Uber was using it. No one knew how it worked. No one could use it. So I got good at it and got a lot of advancement, early traction, whatever, in my career based on that. But it also, you know, taught me to code, taught me to think about, you know, building things that other people are going to use.

Taught me about the type of code that you write when you're in an exploratory phase rather than in an execution, like production, phase. And eventually I actually ended up working for Framer. I did their education for a year, which is very different than the type of product design that I was doing before that.

I did a lot of video tutorials and writing and tweeting, trying to figure out some way to make technical design content interesting, you know, in little chunks that people could consume. And I joke that they probably got less out of me in that job than I got out of the job itself.

Like because, yeah, I walked away from that not sure if I'd helped anyone really learn how to use Framer, but I certainly learned how to tweet and how to record a good GIF and how to talk into a microphone and all that type of stuff.

And so the next role that I had was with a company called Play out in New York, who are also doing design tools. And I really wanted to work in design tools after that. Play is doing, like, mobile, or now it's just general iOS and macOS platform-specific design tools, where you're using actual elements, the kind of widgets, from that component collection.

In your designs, I don't know, and kind of bringing that a lot closer to the end product. And at the same time, I started getting into open source. I'd kind of done some popular open source before. But this was now 2019. It was lockdown. I had a little bit more time.

I also had a daughter, so not that much more time. Or, I'm sorry, it was 2020. And yeah, I guess the open source that I started getting into started swinging back toward some of my artistic interests or studio interests and kind of visual interests. Those are the parts where I felt like the problem space was really underserved, and it wasn't necessarily that the technical problems were really hard.

It was more subjective problems, where I think the thing that was lacking was the taste or the opinions or the feeling for what good solutions were. So the first kind of problem like this that I got into was arrows. Like I had, you know, two boxes or two points arbitrarily placed.

I want a good looking arrow, a quote-unquote good looking arrow, between the two. You know, well, that could be anything. That's not a math problem. Maybe it involves some angles and linear geometry and vectors and all that. But the good looking part really was just my own taste and my own eye and tons and tons of iterations.

And arrows are super tricky. There's a million edge cases, when things are overlapping or things are too far away or too close and all this. But I was working on this, and I was working on it in public on Twitter, recording GIFs of boxes and arrows kind of squishing together and all that.

And I think people really liked that. And they liked following me on this somewhat obsessive journey, which was technical, but it wasn't trying to crack an algorithm. It was trying to figure out and identify the rules governing an aesthetic experience, an aesthetic thing, which was a good looking arrow. That became Perfect Arrows.
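For the curious, the core of the problem he's describing can be sketched in a few lines: given two arbitrary points, bow a quadratic Bézier out perpendicular to the line between them, then derive the arrowhead angle from the curve's end tangent. This is only an illustrative sketch of the idea, not the Perfect Arrows source; the `bow` parameter is an invented name for this example.

```typescript
type Vec = { x: number; y: number };

// Quadratic Bézier arrow between two points: the control point sits at the
// midpoint, pushed perpendicular to the a->b line by `bow` times its length.
function getArrow(a: Vec, b: Vec, bow = 0.2) {
  const mid = { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const dist = Math.hypot(dx, dy) || 1;
  // Unit perpendicular of the a->b direction.
  const px = -dy / dist;
  const py = dx / dist;
  const control = { x: mid.x + px * bow * dist, y: mid.y + py * bow * dist };
  // The curve arrives at `b` coming from the control point, which gives the
  // angle to rotate the arrowhead by.
  const angle = Math.atan2(b.y - control.y, b.x - control.x);
  return {
    control,
    angle,
    path: `M ${a.x},${a.y} Q ${control.x},${control.y} ${b.x},${b.y}`, // SVG path
  };
}
```

The hard part he's pointing at is everything this sketch ignores: clipping the curve to the boxes' edges, flipping the bow, and the endless overlap edge cases.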

And that was pretty popular. But the next one is really what kind of broke my popularity on Twitter, or just in the space, and that was a project that ended up being called Perfect Freehand. This is a little hard to describe. If you've ever used an iPad pencil, or drew with a stylus in Photoshop or something: the harder you push, the thicker the line gets, and the lighter you push, the thinner the line gets.

It kind of is like this ink experience. And it's not an easy problem. But if you're doing it in a kind of Photoshop-style raster environment, you know, the solution is pretty straightforward. You interpolate tons and tons of whatever shape you're drawing in between each point that you've actually moved your mouse through.

And you just change the size of that little stamp that you're making. So it's like a little circle, slightly bigger circle, slightly bigger circle, slightly bigger circle. But they're all really tightly packed together, and so it looks like a line that's changing its width as it moves.
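The raster approach he's describing, interpolating stamps between each pair of input points, can be sketched like this (a minimal illustration of the technique, not any particular app's implementation; all names here are invented):

```typescript
type InkPoint = { x: number; y: number; pressure: number }; // pressure in [0, 1]

// Raster-style ink: between two input points, emit many small "stamps",
// each a circle whose radius follows the interpolated pressure. Drawing
// each stamp as a filled circle makes the line look continuous.
function getStamps(a: InkPoint, b: InkPoint, maxRadius = 8, spacing = 0.5) {
  const dist = Math.hypot(b.x - a.x, b.y - a.y);
  const steps = Math.max(1, Math.ceil(dist / spacing));
  const stamps: { x: number; y: number; radius: number }[] = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    stamps.push({
      x: a.x + (b.x - a.x) * t,
      y: a.y + (b.y - a.y) * t,
      // Radius tracks pressure, linearly interpolated along the segment.
      radius: maxRadius * (a.pressure + (b.pressure - a.pressure) * t),
    });
  }
  return stamps;
}
```

The catch, as he notes next, is that this only works for raster output; it doesn't give you a vector shape you can keep editing.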

My angle on this, the reason why I spent so much time on it, was that I wanted to do that using vectors. I wanted to get a bunch of points in, and then a polygon that sort of defined the outside of that shape coming out, because I like to work in SVG.

And it turned out that this was an insanely hard problem that no one had solved. Or if they had solved it, they certainly didn't open source it. I couldn't find any good example of a variable width line that actually worked fast enough and consistently enough, et cetera, for it to be digital ink.

And so, again, I did this in public, did this on Twitter, a million GIFs of lines that looked terrible but were slowly getting closer to the solution, attracting more people who had solved this problem, or had tried to, or had written their Ph.D.

on ink, and wanted to tell me about, you know, how arcs work in this environment and all this stuff. Wow. And yeah, it was fantastic. I met so many good people who were experts on this, or something like it. And slowly we made a really, really good, tight little library for doing exactly what I wanted.

Like: here are a bunch of mouse points, or just arbitrary points; give me back a polygon that surrounds them, and let me essentially draw a line around the edge of that polygon, fill it in, and it'll look like ink. And so that was Perfect Freehand. And that's now used all over: Canva uses it, Draw.io uses it.

Excalidraw uses it. We use it at tldraw all over the place. Yeah, it's just significantly better than the next best solution in that space, and there really wasn't even any known solution in that space before. So someday I'm going to be checking out at a hotel and see my own ink on, you know, a little iPad or something like that.
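The vector version he's describing boils down to: walk the input points, offset each one perpendicular to its direction of travel by a pressure-scaled radius, and join the two offset rails into one closed polygon. Below is a stripped-down sketch of just that idea; the production version of it, with the tapering, smoothing, and sharp-corner handling that make it actually look like ink, is his perfect-freehand library and its `getStroke` function.

```typescript
type StrokePoint = { x: number; y: number; pressure: number };

// Build a closed outline polygon around a stroke by offsetting each point
// perpendicular to its direction of travel, one rail per side, then walking
// down the left rail and back up the right.
function getOutline(points: StrokePoint[], size = 8): [number, number][] {
  const left: [number, number][] = [];
  const right: [number, number][] = [];
  for (let i = 0; i < points.length; i++) {
    const p = points[i];
    // Direction of travel, using neighbors (clamped at the stroke's ends).
    const next = points[Math.min(i + 1, points.length - 1)];
    const prev = points[Math.max(i - 1, 0)];
    const dx = next.x - prev.x;
    const dy = next.y - prev.y;
    const len = Math.hypot(dx, dy) || 1;
    const r = (size / 2) * p.pressure; // pressure scales the half-width
    const ox = (-dy / len) * r;
    const oy = (dx / len) * r;
    left.push([p.x + ox, p.y + oy]);
    right.push([p.x - ox, p.y - oy]);
  }
  return [...left, ...right.reverse()]; // closed polygon, ready for an SVG path
}
```

The hard edge cases he spent those GIFs on, self-intersection when the stroke doubles back, very close points, taper at the ends, are exactly what this naive version gets wrong.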

And so, I might as well just kind of finish the whole origin story. That kind of led right into tldraw, in that I had integrated my ink into Excalidraw and I'd spent time in that code base. And I'd also created several infinite-canvas-like tools to help me build Perfect Freehand and visualize it and, you know, do my ink and pan and zoom and program against this thing.

That included globs.design, which I won't necessarily talk about, but it's a kind of weird experimental design tool. Anyway, it was an infinite canvas, very much like, you know, Framer, Figma, et cetera. And after doing Excalidraw and working on these kinds of projects that were in the same area, I was like, you know, maybe there's a market here. Or not even a market.

It was just like, I think the thing that I want to work on next is a general-purpose, kind of whiteboard-like engine. Mostly for myself: I'd built globs, but the only thing that you could put on the canvas in globs was a glob. So I had all this code and these solutions just lying around.

I could kind of see how I would adapt it. And so that's what I started doing. And that was the next story that I was telling on Twitter: like, OK, here's how selection works in something like Figma, or something like Miro or Framer or Sketch.

These kind of, not heuristics, these sort of conventions that are part of this really complicated thing called the infinite canvas, you know, going all the way back to Flash, and before then Adobe Illustrator, and before then, all the way back. And they're all pretty consistent between products.

Like if you're making a canvas this way, you have to kind of do them all. Like your undo, redo should work in a specific way. Your selection should work in a specific way. Like, you know, the camera position and how the camera moves should work in a certain way.

All the modifiers, like option-drag to clone. And all of those became their little vignettes of how I was building this thing. This was now like spring of 2021. And I had everyone from any infinite-canvas-related creative product kind of in my inbox being like, hey, can you come work for us?

You know, like, let's talk. Let's do this. And so I was either going to go work for Figma or Adobe. And I ended up going with Adobe, in part because I think FigJam had just come out, and the team at Figma were like, well, this is competitive with FigJam.

I'm like, this thing is nothing. It's a little open source thing. No one uses this. It's just me trying to get to 10,000 Twitter followers. But, you know, it's mine. So, no. So I went with Adobe, but I told them, I'm like, I don't want to start for six months.

Like, this is actually a pretty fun project for me. I want to get it out of my system. You know, let me start in January and just work on this. And so they said yes. And I, you know, quit my work with Play and said, I'm going to go work on this little open source thing for six months.

I have some, you know, contracting money in the bank. Let's drain the company account and do this. That's not what happened. Like, I went full time on a Wednesday. On Thursday, I had a very large communications company say, hey, we're moving our whiteboard that we've designed for specific touchscreen devices.

We're moving that into the browser. It turns out people want to use the whiteboard on their phones and on their laptops and all that, like they do with Miro. And so we need to take this thing that we wrote in C++ to be highly performant on these, you know, kind of tiny microcomputers that were part of these interactive touchscreen TVs,

and make it work on the web, and we don't think it's going to be good enough. We'd have to build from scratch. We don't have the team. Can we just build on what you're building? At the time, this thing wasn't open source yet. It was just sort of, well, it was getting there.

And I'm like, yeah, sure. Like, give me seventy five thousand dollars and I'll let you see the source code. Just, you know, I don't want to talk to you very often. You know, I'm not working for you. I never want to see your code, but you can look at mine.

And they said yes to that. And so I was, you know, funded for those first six months, and I got to work on this without having to feel bad about it. And I'd also eventually opened up tldraw to be, like, sponsorware: if you were sponsoring me on GitHub, you could access it, you know, in its kind of primitive state, on tldraw.com.

And it had like a couple hundred people join that way and sponsor me. So at one point, like my sponsorship was, you know, over five thousand dollars a month, which is not massive money, but it's like I wasn't doing anything different. So it was pretty good as a kind of a passive thing.

Anyway, I shipped it at the end of November 2021, and it was very popular. I just open sourced everything. I was just like, you know, the tldraw.com app, the library, the canvas. It was organized in a certain way; I just made it all public. Everything was MIT.

You know, let's just throw this out into the world and see where it goes. Well, it went pretty far. It was number one on Hacker News for a while. It was the top trending repo on GitHub. A lot of people, like 40,000 people, showed up at tldraw.com to use it on that launch date.

Which was all good. Like, so far, this was all within my same narrative of, OK, this is cool. I'll make this and then I'll go do something else afterwards. The thing that really surprised me was how many teams wanted to build on this. And they weren't building whiteboards.

They weren't like Miro competitors or Figma competitors. They were just like apps that you wouldn't expect to have infinite canvases inside of them. But and they wouldn't have built it except that I had suddenly made this very easy. And I had suddenly shrunk the development time of this like whiteboard like feature in their product from like three years and three people to three weeks and one person.

And not even one person. Just like, you know, no new developers, no new team, no new graphics experts, no computational geometry guys. Like, you know, we can do this. The canvas itself is React all the way down, so even if you wanted to customize it, you'd just be writing React components and then a little bit more code on top.

I was totally overwhelmed by inbound from companies who were like, I want to build this, or I want to acquire you, or I want you to build something for me, or, you know, I want this in my app, how do you help me, how can I do this?

And people were shipping things too, within two weeks, three weeks, like production ready. People had taken this and run with it. And so the story that I started to tell around tldraw was that, OK, well, this is a cool little whiteboard, but it's also kind of filling a gap that no one knew was there. In the same way that Mapbox or Google Maps, you know, provide maps for apps that would definitely not build maps themselves.

Like, maps are insanely hard. Your little local food delivery app just wouldn't have a map in it otherwise. But if they can have it in there, then absolutely it is a value add. It's just completely impractical to build themselves.

And what I learned talking to folks was that every PM had used Miro or used Figma or, you know, one of these other collaborative tools. And every creative product person was like, well, this is fun. Collaboration is fun. This canvas thing is pretty cool. Like, why can't we put our information, our CRM, on the canvas, or why can't we do our sales stuff here?

Like, we're already kind of using Miro for this. Why couldn't we give this to our customers as well? Why don't we build a product around this? And it was just a technical no until, you know, November 24th, 2021, when suddenly it was a technical maybe. And yeah, there was absolutely demand.

So hence, you know, I had to call Adobe and say, no, I'm not going to come in on Monday. It turned out that the best possible outcome of this happened, and there's actually a company here. And then I went out and I raised a seed round from Lux in New York and Amplify in California and a whole bunch of really great angels.

You know, on the story of, yeah, this is cool, the app feels good, companies want it. And, you know, by then I had had almost $200,000 of sponsorship, and people were just signing up and signing up because there was no way to even be a customer.

Like, sponsorship? Sponsorship. But also, yeah, that's massive. You're not saying 200k a month? No, no, no. I mean, up to then, the total amount of sponsorship that I had received was around $200,000. I think some of the recurring stuff was like 5,000 a month.

But yeah. Which is in the top echelon. A lot. Yeah. Oh, yeah, certainly. And just the amount of validation that had come in around this was more than usual. So I raised the company, raised the round, put together a team here in London, and we've basically just been building this whiteboard SDK.

Since then, you know, we reconfigured the project around, okay, we're going to be building this not necessarily for end users, but for teams to use as kind of an infrastructure product, a developer product, something closer to Mapbox. And, you know, we were making demos to kind of like show different ways that it could be used.

Certainly the collaboration thing is a big one, but also the fact that you can put anything on the canvas that you can put on a website, just because it is all HTML and CSS all the way down. And that was going really well. It was already a good story.

And then I'd just raised like a 2 million extension for the company. While I was on the final pitch for that, Dev Day was happening at OpenAI. And in the morning I woke up and I was getting all this kind of action on Twitter, because a developer at Figma had used tldraw to make this little demo where you could draw a website, click a button, and get back a big pop-up that had your website in there.

It was like a prompt, like, you know, hey, you're a developer, you just got this wireframe from your designer. Can you give it back as a single page HTML file? And it would do it and it could do it. And then you could show that website to whoever is using the app.

And we took that and we're like, wow, you could do so much more with this. With tldraw, it's only scratching the surface of the type of integration that you could do. So again, we had just finished the raise. Pressure was off a little bit. It was kind of getting towards the end of the year.

I was like, all right, let's just take this and have some fun. Let's make some viral shit. Maybe we'll get like 200 likes or something like that. And it exploded. I think for this month, the last 30 days, we're at like 22 million views or something like that.

It was Kanye West numbers. It was really, really, really popular for a couple of days. If you're on Twitter and at all technical, you might have seen a ton of tldraw stuff on your timeline about two or three weeks ago. Well, so, yeah, that kind of brings us up almost to today.

You just released something two hours ago, which we'll talk about. But maybe this is a good time to bring up the screens, you know, for those who. Oh, yeah, yeah, let me share. Yeah, so we're recording a video as well. You can jump over to the YouTube to see stuff.

But this is an inherently visual podcast, so we have to show stuff. OK, the incremental thing I got from your blog post. So you did do a write up, which thank you for that, because I actually didn't know that you did a write up. It was just drawn up.

Oh, yeah. All the videos. This is the power of open source, right? That someone else had the idea. You weren't even focused on Dev Day. Someone else had the idea and just made it without your permission or talking with you. And then the idea could spread back to you and you could run with it.

Yeah, exactly. And we had made a lot of the bits and pieces already, based on, you know... I mean, it's well documented. Or, it's documented. It has tons of examples and all that. Yeah, and it's not, I mean, it's a big library as far as open source libraries go.

But it is, yeah, you can work with it. And once this thing got popular, the first thing we did was create a starter kit so that someone could take it and run with it. Yeah. So this is normal tldraw, where, yeah, you can draw, you can, whatever, move things around.

It works; if you've used Figma, if you've used Miro, it's kind of familiar. And you can put pretty much anything on this canvas that you want, like YouTube links, et cetera. I always have to remember to open YouTube in an incognito window so that people don't see my embarrassing interests here.

So yeah, because this canvas is HTML and CSS and divs and stuff all the way down, you can put things like YouTube videos on there. You can even make them play. Because, again, anything you can do in a website, you can do on tldraw's canvas. What's fun is, because it is a canvas all the way down, you can also draw on top and do the kind of canvas manipulation stuff that you might do with normal shapes, but also with this type of content.

So that ended up becoming a big part of why Make It Real got popular. So anyway, I'll show you Make It Real now. This was a hastily built layer on top of the tldraw engine SDK that we'd set up. And the idea here is that you can make a wireframe and we're going to send it to GPT-4 with vision, with a prompt much like the original one that Sawyer Hood had come up with, which is: you are a web developer.

You work with designers. They give you wireframes and notes and screenshots and all sorts of stuff, could be anything. Your job is to come back with a single-page, single HTML file that has all the styles, all the JavaScript, all the markup necessary in order to make a real working prototype based on what you've been sent.

It also has slight emotional manipulation like you love your designers and you want them to be happy and like the better your prototype is, the happier they are and all that. Oh, in the prompt? Yeah, yeah, yeah. Again, it's open source. You can read the prompt. It's kind of a funny one.
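Roughly, the request being described looks something like this in the OpenAI chat-completions shape. This is a reconstruction from the conversation, not the actual Make It Real source (which, as he says, is open and readable); the system prompt here is a paraphrase, and `buildMakeRealRequest` is an invented helper name.

```typescript
// Sketch of the multimodal request: a paraphrased system prompt, plus a user
// message containing the wireframe screenshot and the canvas text.
const SYSTEM_PROMPT =
  "You are an expert web developer. A designer sends you wireframes, notes, " +
  "and screenshots. Reply with a single HTML file containing all the markup, " +
  "styles, and JavaScript needed for a real working prototype.";

function buildMakeRealRequest(screenshotDataUrl: string, texts: string[]) {
  return {
    model: "gpt-4-vision-preview",
    max_tokens: 4096,
    messages: [
      { role: "system", content: SYSTEM_PROMPT },
      {
        role: "user",
        content: [
          // The selected shapes, rasterized to an image, exactly as if you
          // had copy-pasted a screenshot into the chat.
          { type: "image_url", image_url: { url: screenshotDataUrl } },
          // The canvas text sent separately, since OCR from the image alone
          // is unreliable (more on this below).
          {
            type: "text",
            text: "Text in the wireframe:\n" + texts.join("\n"),
          },
        ],
      },
    ],
  };
}
```

The response's HTML then just needs to be dropped into an iframe on the canvas.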

But we do some other tricks that I'll show you in a second as I do this. So you select something like this, you click the Make It Real button, and you get back a little waiting box. We've considered running ads in this waiting moment, but no, I don't know.

You could. Maybe like a kind of Zelda-style tips and tricks, like here's different ways you can do this. But yeah, for example, this is part of the joy of a multimodal prompt: we also send it the photo, which looks the same as if you had done a copy and paste thing.

So like an image. And you have all this functionality worked out, you know; that's what I find so poetic about this, that you were just ready. Yeah, I mean, as collaboration and AI and stuff was going in one direction, we kind of just went off in our own weird, hey, the world is really going to need a whiteboard at some point, direction. And then it kind of met us where we were, and we've been able to show up on day one of this new world of possibility with the thing that, if I hadn't spent the last two years building it, I would spend the next two years building. It is the right product for this type of feature.

So anyway, they give us back HTML, we stick it into an iframe, put that onto the canvas, just like we did with that YouTube link, and I can interact with it. So it should be going from orange to pink, orange to pink. Hey, it's given us a hex code; I can click the hex code.

And there it is. Now, in something like v0 or some of these other kinds of prompting environments, the only way for you to then make this better is to change the input.

Maybe you could do this with ChatGPT or something: you could write, oh, actually, you missed the labels, it should say orange and pink on top of this thing, and it doesn't. So you could go back here and, you know, make sure that this is fixed; you can change the input.

But because this is tldraw, because you can draw on top of this stuff, you could also just write on top. You could kind of modify this, and maybe even give it the same type of markup that you would give to a designer, you know: draw some arrows, or maybe paste in a screenshot and say, hey, make it look more stylistically close to this other thing.

And I'll say this as well: I'll write, like, full width for that button. And anything else that we... well, let's just see what it does. And then what you do is you select the website that they gave you back, the previous result, along with all this markup.

And you use that as the new input. And so that's going to give you, yeah, it's going to give you an image that looks like this, that you've now sent. But we've also tweaked the prompt a little bit when you do include a previous result, to say, hey, the wireframes coming back are annotations or markup based on what you sent before.

And there you go. So now we have a new prompt, and sure enough, the labels are there, the button is full width, and, you know, it still works just the same as before. So we send it back. Again, we send it the image, we send it the text, the prompt.

We also send it all of the text items themselves separately, because GPT-4 with vision is not really great at recognizing text. So we say, oh, by the way, your vision's not so good, so we've had our copywriter, you know, list out all the copy that you can use.

I think we even send it back the HTML that it used for the previous result. So we just dump as much information as possible at GPT-4 with vision. And that's how you're able to get these sort of iterative results. And it is legitimately good. Like, it feels like work.
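Pulled together, the iteration step he's describing, the new annotated screenshot plus the canvas text plus the previous result's HTML, might be assembled something like this. Again, this is a sketch inferred from the conversation, not the actual source; the function name and the wording of the text parts are invented.

```typescript
// Compose the user-message content for an iteration: the screenshot now
// shows the previous prototype with annotations drawn on top, and we also
// pass along the canvas text and the HTML of the previous attempt.
function buildIterationContent(
  annotatedScreenshot: string, // data URL of the selection, incl. prior result
  texts: string[],
  previousHtml: string
) {
  return [
    { type: "image_url", image_url: { url: annotatedScreenshot } },
    {
      type: "text",
      text:
        "The designer has annotated your previous prototype. " +
        "Text in the wireframe:\n" + texts.join("\n"),
    },
    {
      // Dump the prior HTML so the model revises rather than starts over.
      type: "text",
      text: "Your previous result was:\n" + previousHtml,
    },
  ];
}
```

The principle at work is simply: give the model everything you have, image, extracted text, and prior output, and let it reconcile them.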

It feels like you're actually doing stuff when you're iterating this way, slowly shaping and adding complexity, doing it step by step, you know, as you're building something. And when you're done, you can, you know, copy a link to that and open it in a new tab. We host it; it's there forever.

You can bookmark this. If you really just needed a slider between orange and pink, well, now you have one, whether you could code it or not, or whether it was maybe not worth building, or worth using a no-code tool to build. You know, we just made that in five minutes.

If you are more on the coding side and you want to use this as kind of a foundation for a real project, or maybe just to see, well, how does that actually work, you can open it up in StackBlitz or CodeSandbox.

I think tomorrow we'll have Replit. And yeah, see all the code, see what ChatGPT came up with, and kind of use it or adapt it or, you know, keep it going or do whatever you want with it. Because it is real.

You made it real. Yeah. Yeah. It's interesting. You can also, I've seen some of your other demos, it looks like you're about to move us on to another one. Yeah, I'm going to grab a couple of the ones that I have shown before. This one's a really interesting one.

Okay, so what's on the screen now, just to kind of narrate and describe it: I have a drawing of, like, a kitchen timer, where you can add a minute, add a second, start or stop the timer, or reset the timer.

And then next to it, I also have a state chart, a state machine, describing the three states of the timer: stopped, running, or complete, and what each one of those buttons should do in terms of transitions or changing the state. I think you can hand this to pretty much any designer or developer and get back a working result.

It's fully spec'd, sort of. My friend David Khourshid might say: develop the state chart first and then plug it in. Yeah, exactly. And what's cool this way is that we can do a couple of things in parallel.

The first thing I'm going to do is make a box over here and write "kitchen timer" right in the middle of the box. And this is going to be the only prompt that I give it.

I'm just going to click Make Real on just that kitchen timer box. Sometimes you see this with multimodal prompting: someone will draw a calculator in a lot of complexity and say, make this real, and sure enough, you get back a really complex, full calculator.

But if you did the same thing with just an empty box and the word calculator, it would give you back the same thing, because it knows what a calculator looks like and how it works and all that. So next, let's also give it just the user interface, without the state chart.

We'll leave the state chart out and do just the user interface. Then we'll do just the state chart and say, hey, make this real. And then we'll do both the state chart and the UI together. So we have four different prompts with four potentially different results, based on variations of the same input.

So first off, our kitchen timer, where all we did was send it a box with the words kitchen timer. It has, well, I don't know what this box is for, but we have a time, and we have start, stop, and reset. I can double-click in. I can click start.

It doesn't do anything. Oh, what is this? Oh, whoa. Okay, if a number is in the field, then it runs. If I stop it, it stops. I can start it and it'll keep going again. And I can reset it. There we go.

The only weird thing is that it has a number input field for the number of seconds that I can type into. But you know what, in a pinch I'll take it, if I really just needed to count 60 seconds or something.

Next we have the result where the input was just my drawing of a kitchen timer. I didn't tell it it was a kitchen timer, I didn't send it the words kitchen timer, and I didn't tell it how it should work, but it did produce something that looks about the same.

Let's see if it works. I'm going to click minute, second, start, reset. No. So unfortunately it did not make any working UI, although it did put the buttons in the right places. Maybe it over-focuses on the UI because that's all you gave it, you know?

Yeah. I mean, there is language in the prompt along the lines of: use what you know about the way applications work to fill in the blanks in terms of behavior. But let's go to the next one. This one is where we only sent it the state chart.

There's also something in the prompt that says: if it's red, it's not part of the UI; treat it as an annotation rather than as a thing you should actually build. Okay, so this time it actually looks a lot like the previous one, but it does have these minute and second buttons.

Oh, weird, it has plus and minus for minutes and seconds, and it also has this stopped state written at the bottom. So there are four buttons, minus minute, minus second, plus minute, plus second, and then there's start and reset. So, does it work? I can add a minute.

I can also subtract a minute. I can add a second. And if I press start, we're now in the running state. Unfortunately it's going up rather than down. And I can reset it. Okay. I'm curious what happens if I give it an additional prompt here and say: this should count down, not up.

I'll just draw an arrow towards the start button and see if that makes a real one. And then finally we look at the last example, where we sent both the state chart and the UI. We get something that looks much, much more like our user interface.

The question is, does it work? Yes, it does. Perfect. I can stop it and start it. Yep. And reset it. Wonderful. And in this case, yeah, my feedback was accepted: I went back to the one where I had asked it to count down rather than up, and it all looks the same, but now it's counting down.

So I think for folks who have worked in design, and in user experience design in particular, this should feel pretty familiar: sketching things out, trying your best to specify what it is you want, and seeing what you get back from your designers.

Or seeing what you get back from your developer. But having an environment in which that game loop, that iteration cycle, is instantaneous and essentially free is really, really wild. And you end up spending a lot of time not only getting into the head of the AI, asking, okay, why is it getting confused?

What am I sending that's confusing? How do I send more information in order to produce a better result? But it also really forces you to clarify your own expectations. Somewhere up here I have a drag-and-drop list where you can drag list items between lists, and when I started working on this and specking it out, I realized, man, this is actually not only really hard to produce a good result for, it's also just really hard to describe.

The failure was really on my end, for not knowing how to get the information in there, because I didn't actually know how this thing should work.

But you know, I could figure it out, and I have an environment in which to figure that out. It's fun. That's amazing. I'm still processing. Oh, you have a picture of it. Oh yeah. All right. So this is slightly obnoxious, but this thing went massively popular on Twitter.

Yeah, thousands-of-retweets type of level. And there were some folks subtweeting it, saying things like, get over it, it's just a wireframing tool or a no-code tool or something like that. One guy did say, you know, I prefer to wireframe the old-fashioned way, with pen and paper.

And I was like, oh yeah, no, that works too. This works with screenshots: I can just take the screenshot that the dude posted of the drawing he had made. It's not even a good photo; there's a pen lying across one of the screens, et cetera.

But if you give just that as a prompt, with no other information, you get back a pretty good result. It's loading right now, but I've done this before, and yeah, just from this photo of a piece of paper on the guy's desk, you get back a working website.

That was inferred from just that picture, with no other input, not even titles or anything else. And of course it's responsive and all that. And so, yes, I've worked really hard to make all of our shapes really good, and our arrows obsessively good, and all this stuff.

But the fun of the infinite canvas, and tldraw in particular, is that you can just dump whatever you want onto the canvas: screenshots, text, images, other websites, sticky notes, all that stuff. And the model, even as something that was in preview, the very, very first multimodal model, can do a really good job of taking all that stuff as the input.

And so we accidentally made a really, really good visual multimodal prompting application environment, or user experience, I'm not even sure what we're going to call this thing, in tldraw. In the pre-show prep you also talked about parallel prompting.

Is that basically just prompting and then moving on to something else? Yeah, that's kind of what we did up here with the stopwatches: getting multiple prompts going at the same time and arranging them spatially. And people have done this with imagery too, using DALL-E to make a tree of prompts as you go, different iterations based on whatever you generate; you pick your favorite one and keep going, kind of like what you do in Midjourney.

But having that be spatial, arranged on a canvas so that it actually makes sense to you, so that you can look back and follow it forward. I don't know, whiteboards and infinite canvas stuff are just really good for a lot of things.

Like organizing a whole bunch of different content that is irregular or ephemeral, or that has a kind of ad hoc meaning in its configuration: things that are next to each other, things that are in a grid, or, in this case, what we have here with the stopwatch, where there's an implicit meaning that the source is on the left, the result is on the right, and any further iterations are further to the right.

Right. We didn't put that into a data model, we didn't structure it in any way. That meaning relationship doesn't really exist in any part of the product; it just exists to us, because we can make sense of it. And so not only is it cool that a model can now make sense of it as well, but for organizing complex iterations of imagery, complex iterations of outputs, et cetera, the canvas is the place. I really do believe that.

Yeah, I mean, that's really incredible. It's also a little, you know, I think a few developers are kind of scared about how much of their jobs this does. Obviously there's a lot more that it can't do.

Yeah. So the "will this take my job" story is interesting. I mean, I'm not actually concerned; I think this augments. Actually, my concern as a developer is that this is good, but not good enough. It's good for throwaway UI, but would I actually export the code and use it?

I don't know. It looks like your first MVP was just HTML files, which, if it's a single HTML file, can have some JS or some CSS. I saw some problems with layout in there, so I don't know how good it is at layout.

It looks like you could just prompt it for Tailwind if you want Tailwind, and I assume it can generate React. I don't know. What are the limitations of this thing? Well, there are the limitations of that particular demo, which is that it couldn't do React, because it needs to be a single compiled thing.

You can't do any multi-page stuff or anything like that, but that's more about how we're structuring the project than a specific requirement of the project itself. I mean, there are really two questions: how big is the input window, and how big is the output window?

So in theory, the input could be an entire full-stack React application, together with all my UI and all my components, et cetera, plus a screenshot that I took of the landing page where the menu is in the wrong spot, annotated with some arrows and some text to say, here's where I want it to be, here's what I want, et cetera.

And the output could be a diff that I can apply to my code base. Basically, produce the git commit that would change this, and have that commit touch multiple files, et cetera, so that you potentially have a solution that's ready to apply, like a patch or a PR. There really isn't any limit in that.

And as we've seen with Copilot, et cetera, the challenge is more on the input side than the output side. Absolutely, you could figure out a way for this thing to spit out a working iOS app or something like that. The question is, how do you tell it what you want, and how do you iterate when it gets it wrong?

Just doing zero-shot after zero-shot after zero-shot is a really frustrating process. But if you do have a way of iterating, of moving step by step towards the solution that you want, of saying, okay, this is good, but it's not great, it could be better, et cetera, that's how you actually make that type of complex output more practical, more realistic.

You're probably never going to get the prompt just right. Even if you have a really, really good agent three generations from now, you still have to put that information in.

And you're never going to put all of the information in the first time; you need to be able to iterate on it. And with visual stuff, that's part of what the canvas unlocks: that space of iteration, where you have a way of marking up the result and using that as the new prompt.

And that's kind of new. Yeah. Multimodal prompting is such a brilliant concept that I think it's going to become the norm for some things. In my mind, what you demonstrated brings to mind Photoshop's concept of layers.

You can kind of simulate layers in tldraw, and I see an emergent property of layers in this kind of prompting: there's the UI layer, and then there's the state chart layer, and those two things seem pretty useful in specifying a prompt.

I was wondering if you've thought about other dimensions, other layers, that would be useful in multimodal prompting. Yeah. One thing that we've done is bring in screenshots of other apps: here's stripe.com, make it look like Stripe, you know?

Or here's linear.com, let's do it this way. You could just give it a design and ask it to make it pop instead of make it real. Yeah, exactly. Make it pop. Make it more, you know.

So there's the idea of bringing in style as another part of the input. Flowcharts are absolutely useful too. It really just boils down to: what would you give a developer you were working with completely asynchronously?

If you had to spec out a project, print it out on paper, and mail it to a developer, and they were going to mail back a disc with an HTML file on it, what would you send? What if you were sending this to the moon or something?

Uh, uh, so yeah, definitely like descriptions of how the state should operate and specs on that. Uh, we've, um, what else have we done? We've done flow charts, we've done screenshots of other apps. We've, uh, I think we've, we've even just pasted in code, like, like here's a whole bunch of Jason that you can use, um, and have it just read that as the, as the input data.

Uh, we can, you can point it at specific endpoints. You can say like, I want you to hit this endpoint and then display the results, you know, as, uh, as cards or as items or something like that. Uh, not, I mean, you don't even have to like wire this up.

It's not like retool or anything where you, you have to register that or, you know, it's not built into the tool. You're just from an endpoint. Oh yeah, yeah, yeah, yeah. I'm trying to think of, uh, what a good demo endpoint would be. We could, maybe we could do one more, uh, more test.

What is it like dog.co? Is that? Yeah, it's a good demo. Yeah. Yeah. I've used that one. Um, I mean, this, this might be, this might be kind of like the box with the word calculator. Like it might just know because it's probably been a bunch of tutorials. Oh, it's in the beginning set.

Yeah. Yeah, but, but, uh, you know what, we'll do it anyway. We'll, uh, I'll, I'll share it and, uh, we'll, we'll try. So for those who don't know, dog.co is, is one of those like, uh, demo APIs that, uh, you just set up just because it's not offensive. And, uh, yeah, exactly.

You can, you can get dog.co. Yeah. Um, I'll try and think if there's a, there's a more complicated endpoint that we could hit. Uh, maybe, maybe while I'm doing this, if you want, it gives me more ideas. Yeah. It gives me more ideas of what to promise and what, what can go in.

I definitely didn't think about hitting endpoints, just because it's not in any of the demos I've seen. Yeah, but it works. Let me see. I'll have a big button down here: show me a dog. Okay, so that's going to be our show-me-a-dog button.

And this should be a picture of a dog. Oh, that's a great dog. Oh yeah. And then we'll do some annotations here. We'll say: okay, when this is clicked, get a new dog. Yeah, there are those perfect arrows coming in.

Yeah, exactly. When clicked, get a new dog from, I'll just paste in this endpoint, and put the result in the image. Okay. So it's more of an instruction than you would normally write. One thing it's going to have to guess is the response format, right?

Because it could be anything. This is true. Let's see if it works. Well, hey, all right. And let's see if it actually hit the right endpoint in the right way. So, the dog button. Yeah, okay. It hit the right endpoint, got the JSON, got the dog image, and put it in.

So there you go. You have yourself a JavaScript tutorial in a box, ready to go. And I think, I mean, we probably wouldn't do this on camera, but you could say, you know, use this auth token and really get real data back from this thing.
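The behavior the model inferred here can be sketched in a few lines of client code. To be clear, this is a guess at what the generated page does, not its actual output; the dog.ceo endpoint and its response shape are real, but the function names are illustrative:

```typescript
// Hypothetical sketch of what the generated "show me a dog" page does.
// The dog.ceo endpoint and its { message, status } response shape are real;
// the function names here are illustrative, not the model's actual output.
interface DogResponse {
  message: string; // URL of a random dog image
  status: string;  // "success" on a good response
}

// Pure helper: pull the image URL out of the API payload.
function extractDogUrl(payload: DogResponse): string | null {
  return payload.status === "success" ? payload.message : null;
}

// Wire the button to the endpoint; setSrc would update the <img> element.
async function showMeADog(setSrc: (url: string) => void): Promise<void> {
  const res = await fetch("https://dog.ceo/api/breeds/image/random");
  const url = extractDogUrl((await res.json()) as DogResponse);
  if (url) setSrc(url);
}
```

The point of the demo is that the model guessed the `{ message, status }` response shape from the URL alone, which is the part a human would normally have to look up.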

Yeah, there's no reason why it wouldn't be able to do that. Well, not quite: inside the prompt for this, we do give it an array of all the text that you've put in. We say, look, I know your vision isn't so good, and you have a hard time reading text when it's small.

Because the input that the model gets is pretty wild. It takes the canvas as a PNG, and then, this is out of tldraw's control, it resizes it, squishes it into a 512 by 512 image or something like that.

And tiles it, yeah. So the text especially can get chunked up, especially if it's small. So we send those strings separately, so that it can reassemble anything it couldn't read right off the bat.

Yeah, it's a weird future that we've found ourselves in. Pretty cool. It is pretty cool. You know, one layer I automatically think of is the backend. As someone who works at AWS, I see a lot of systems diagrams, cloud diagrams, entity-relationship diagrams for database schemas.

So I wonder if anyone's extended this to the backend. And obviously the next level from that is full-stack apps, where you have the backend behind the frontend. Yeah, the short answer is yes. There's someone on Twitter who was using this with flowcharts.

I'm not a backend guy, so I don't know exactly what the output was, but I believe it was a configuration script for AWS that was built off of this. I think he just copied and pasted a diagram that he had made in tldraw anyway and said, okay, let's throw this at the thing and see what it comes up with, tweaking the prompt to say: rather than building single-page websites, return the JSON description of this configuration, or return a script that would set it up.

You could tweak it to say, here are all the entity relationships between different tables, and items in tables, and give me back the SQL initialization that would create all these tables and set up these relationships.

Again, the hard part is getting that information in, and pictures are really good for that. If you can tell it that way, it works. Yeah. Awesome. So, you were responsible for one of the two multimodal viral hits I think about from November.

And you had a part to play in the other one too, which is the latent consistency model trend, where I think you worked with fal. Yeah. And actually I do have something to show here; we have a couple of things to show here.

We connected with fal because they used tldraw to create a demo for their LCM. Right, latent consistency models. We took that and made Draw Fast, at drawfast.tldraw.com, where you basically get these little Draw Fast shapes.

It basically grabs a new image and puts the result right next to your drawing. And these are extremely fast, so as I'm moving things, you should see the image updating as well. And I think this was originally not a goat, but whatever, it's a wise princess now.

I don't know, I play with this more with my daughter than anything else. I like what this looks like, you know. Oh my gosh, she does too. And actually we had a lot of folks on Twitter being like, this is not good.

Because I had posted a video of my daughter drawing. She made this awesome drawing of a mermaid, and we turned it into this really anonymous, crappy illustration of a mermaid, and people were like, no, no, the children's drawing is much more interesting.

And I'm like, yeah, yeah, yeah, come on, who cares? Of course it is. But this is fun. Yeah, it's pretty wild. You could make some kind of animation with this; it's almost like stop-motion film.

Yeah, I mean, we need to do more work on consistency, but this is getting there. Yeah, it's a car driving through a busy marketplace. The fun is that after playing with this for a little while, you end up getting really into the particularities of the input.

Like, what can you do with a design tool? You can move things around, right? I can grab some of these and move them around. And oh yeah, there's a highlighter here too, so we could do some highlighting; that'll do stuff.

And then, well, we couldn't help ourselves, we started making these little stories. One thing you can do is click this little button, and that'll put the drawing and the result on top of each other. All right, now I'll move on to the other one, which we released earlier today, called lens.tldraw.com.

So that was drawfast.tldraw.com. And again, this probably doesn't make for good podcast audio, but the image updates as quickly as possible based on the input drawing, and it's pretty hypnotic. This one's a little riskier because it's live.

For this we took a project called Together, which is a vertically scrolling, infinite, collaborative drawing experience, a little bit like a chat room. As you're drawing, everything is slowly moving up, and it just disappears off the top of the screen, never to be seen again.

So it's kind of just fun to play with. By the way, one of the most magical chat experiences I ever had was with you. I think you were with your daughter, and I was showing off Together, and you started writing, I started writing, and we had a chat on Together.

Yeah, I was like, what is this? It's super cool. Inevitably someone will write, you know, where are you from, and everyone's chiming in and talking. So I'll describe what's on the screen now: we're basically taking a screenshot of a square cut from the center of this chaotic, vertically scrolling chat experience, sending it to the LCM, and putting back the image based on a prompt, like desert scene, or busy marketplace, or futuristic cityscape, or something like that.
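That "square out of the center" capture step is easy to picture as a pure function. A minimal sketch; the function name and return shape are my own, not tldraw's actual API:

```typescript
// Compute the largest centered square inside a canvas of the given size.
// This mirrors the "screenshot of the center" step described above; each
// capture would be cropped to this rect before being sent to the LCM.
// The name and return shape are illustrative, not tldraw's API.
function centerSquare(width: number, height: number) {
  const size = Math.min(width, height);
  return {
    x: (width - size) / 2, // left edge of the centered square
    y: (height - size) / 2, // top edge of the centered square
    size,
  };
}
```

For a typical 1920 by 1080 viewport this yields a 1080-pixel square offset 420 pixels from the left, which is then squished to the model's input resolution.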

And it's updating, you know, ten times a second as we go. Yeah, it updates surprisingly quickly, ten frames per second. Yeah, I think it's basically down to 32 milliseconds now as you go.

So if I draw a big orange thing down here, it's going to show up in the drawing. Maybe I'll do a big black one so you can see better. It just sort of becomes part of the input to this prompt.

And it is extremely hypnotic. This, again, is lens.tldraw.com. This is live, so hopefully no one comes on here and says hello Steve or anything like that, but it is, yeah, this slow-moving, collaborative, hallucinatory experience, and it just never ends.

I mean, I'm probably going to be funding fal completely through their Series A or something like that. But, are you on here, Sean? I'm not, but maybe I should be. Oh yeah. Well, whatever. I don't know.

I have a healthy respect for the amount of processing that must be going on behind these things. What's funny is that, oh, cool, perfect, someone's doing that. It looks like I can't really draw the way I normally draw in tldraw. You blobbed it somehow.

Yeah, everything's a little bit bigger, and when there are 36 people on it, it's kind of slow as well. What's funny is that we're using Cloudflare Workers to do the updates, and CRDTs to do the collaboration, and, you know, LCM models to create this image.

But there's also a laptop in my living room right now that is doing the actual screenshotting and sending it up. So there's a big note that I had to write for my family saying, don't close this laptop, because it needs to stay on for this thing to work.

And yeah, no matter how good our tech stack gets, we'll always come back to some laptop stuck in a corner that can't possibly be turned off. It's pretty fun. Yeah, I've heard of major businesses being run that way. Yeah, exactly. There's a Raspberry Pi in the closet.

You know, it's really funny, because you're inventing your own art form here. This is fine art, going back to your degree. Yeah, it's funny, because while the output of this is a visual output, the output doesn't actually matter.

It's gone in 16 milliseconds and it's not coming back. And I think with all this AI stuff right now, just where we are with it, and how completely unknown it is in terms of where it's useful, the best thing you can get out of it is the experience.

So I think the thing people should walk away with from playing with lens.tldraw.com is the experience of having interacted with this thing, maybe together with others, rather than, oh, it made my favorite image or something like that.

So yeah, I don't know. As a former image maker, the idea of having an aesthetic experience where the image is a major part, but not necessarily the important part, where no single image is the important part. There's something new-feeling about this kind of fun.

I certainly wish I could do a big critique with all the new media arts people about this, and about where this fits in.

Yeah, with other people. Well, that's for them to write and for you to build. And I would encourage you to keep building there, because you're definitely not done with your explorations. Thank you. Cool.

Yeah. Well, so I can kind of round it out by sort of looking towards the future. Uh, you hinted a little bit, uh, you're working towards TL draw 2.0. Um, so first of all, actually, so, uh, it seems like you're very focused on the core mission of canvas. Um, and the AI stuff is, is a side project for now.

Um, why, why not pursue it as like a full, why not pivot and like be an AI company? Right. Like that's, yeah, I guess a lot of those questions. Yeah. I mean, when you, when you get something as viral as, as Tiltraw got, like, I think I've talked to everyone, um, certainly every, every investor, uh, and, um, okay.

So I guess the way that, the way that, yes, we, we probably could on for something like together or that draw fast thing, uh, make a tiny little SaaS app, you know, give me $10 a month, play with this thing. And, uh, you know, it could make it, make it good.

We could go in that direction. Um, there's not much of a moat around any of this stuff. And we're seeing that just in, you know, I don't know, Gemini is going to come out in a couple of days, weeks or whatever. Um, and if it's better than people are just going to use that until the next better thing comes along.

There's nothing really defensible about, hey, it's a drawing app plus an LCM model, because there are going to be a lot of those models and there are going to be a lot of drawing apps.

The thing that I think is really unique to tldraw, the thing we've added that is not easily recreated, is the canvas itself: this web-based, hackable, extendable, super refined set of interactions and all that. All the thousand table-stakes features that drive people nuts when they're building something like this, they're all there, they're all good.

From day one, you could build a really great experience using tldraw, whether it's AI-driven or not, in a way that's just not practical if you're building it yourself. And especially if you're not doing graphics stuff, there's really not that much else out there oriented towards this type of thing.

And I think in a world where these types of AI-driven capabilities are just going to keep coming out faster and faster, like, I don't know, next year is going to be wild.

Every month there's going to be some new capability or something. The thing that I would want to see, both as a person and as someone who's built the business that I've built, is for tldraw to become the place where some of this prompting, some of these ideas, are explored.

Even if we decided, okay, we're just going to close everything up, we're going to build a product based on this, and maybe it's a great product, it would only be one direction, one ray into this infinite space of possibility.

And that could be successful and good, but we've built the direct manipulation core, and there are so many AI-specific APIs that we could build around tldraw: having a virtual collaborator, or working with images in a much richer way.

There's just so much we could build to make this the best possible place to explore not just one direction but many, many directions. And that narrative gets me out of bed in the morning; that gets me much more excited.

And with the team that we have, the tech that we have, and the skills that we have, we're more the team to build that than to become a SaaS product company. I'm not saying we'll never do a pay-us-10-bucks-a-month-and-play-with-our-magic-toy thing.

But primarily my goal is to make tldraw the place to explore these different models, or, you might think of it as the battleground on which the winners will be identified. Like right now we're using OpenAI for the Make It Real thing.

Maybe next week we'll be using Gemini, and now it's a question of, okay, we have an environment in which to compare these two models with the same input, and a very advanced form of input. Let's see which one does better.

Nothing would make me happier than to be the battlefield for multimodal prompting and the multimodal AI experience. I should also shout out BakLLaVA, the open source vision and multimodal model. Yeah, I mean, so I fully understand: you want to own the light cone of multimodal prompting.

I think that'll probably be the title of the episode. What's coming up for tldraw 2.0? So really, the tldraw that you're using now and that I'm using is basically 2.0. It's been in pre-release for a long time. Really, the only change that's going to happen once we launch is that we're going to start selling commercial licenses for it.

So if you're using tldraw in a commercial product, or if you want to, then if you're funded or if you have revenue, you'll buy a license and I'll add you to our special list of customers. So yeah, it's mostly just go-to-market and the necessary changes around that.

There will be some fun, secret-saucy changes at launch, but nothing substantial, nothing breaking. We've put a lot of effort in; it's crazy that this new version has only been open source since May of this year. We've been very busy since then, but it's stable.

It's robust. We've put it through a lot of usage and caught a lot of the issues, so it's absolutely ready to go. But I have one or two conversations with my lawyer before we turn over the license and start moving it that way.

Gotcha. And then maybe, if I could get your commentary before we close on the competition out there: you're not the only canvas tool. I was going to ask about Figma and FigJam, and they have some AI thing that they're also doing.

Adobe is also working on similar things. Canva is also working on similar things. But they're all individual point solutions, whereas you're more of the open source canvas to power all of them. Yeah, I feel like it's just Excalidraw that's the other alternative that remains. And I like Excalidraw a lot.

I've contributed there, and we retweet each other and tease each other on Twitter and stuff. Early on I was copying features from them; now they're copying features from me. But no, the collaboration space has so many dominant players that I think me and Excalidraw are tiny within it.

Yeah, well, there's two things. One is that we made this very strange bet on a kind of web canvas: our canvas is not an HTML canvas element, it's normal React components all the way down. So if you want to add something interactive and have it participate in the space of the canvas, the way we were doing our iframes and being able to write on top of an iframe, you can't do that in Excalidraw.

You can't do that anywhere. That's a very strange tech choice we made with tldraw, and it's finding its home in a few different ways. Most of the people who pick tldraw and approach me, the inbound that I get, are folks for whom that's the killer feature: being able to put interactive widgets on the canvas using just React.

And I'd repeat the same point that you suggested: no matter how good Figma's AI solution is, and I hope it's great because I love Figma and I use it, it's not going to solve every possible problem in this space.

It's not even going to touch most of it. I mean, I'd already identified this pattern: there was a point where any Kanban board was Trello, right? When you talked about Kanban boards, you were talking about Trello.

Kanban boards are in every productivity app now. I think the same thing is going to happen with collaborative whiteboards. People like them. I'm making it easy; people are already building them even without tldraw, when it's hard. That's going to become a commodity user experience in a lot of different products.

Probably, you know, give me a diagram from a text prompt: yeah, that's probably going to be a commodity too. Give me an image from a text prompt: that's just going to be everywhere. We're going to assume it, like adding a GIF to a chat or something. There's no moat there.

I do hope Figma has an amazing AI integration, but I think the thing it will help you do is use Figma. Generating an image won't be super useful, but extend-and-autocomplete-this-design absolutely would be.

And I hope they launch something amazing there. But like I said, there's just a million different directions this stuff could go in. The canvas is just an input device. It allows a certain type of user experience, and that's certainly not limited to design.

It's not limited to whiteboarding. It's not limited to collaboration or anything like that. So my hope is that there are those kind of 10,000 products that could be made with what we're making. Yeah. That's a really great mission, and I see why you're so passionate about it.

You're the right team for it. Okay, a couple of lightning round questions. One: if you had some AI capability that you wish for but don't have yet, what would it be? Oh, that's a really good question. Something that helps people do research, I think, probably related to, it's not quite a CRM, but just normal human relationship management.

This is something that I never had a problem with until I had a startup, actually, where there are just a lot more people involved in my life, and it's hard to keep up with them all. I think this is probably something an EA kind of does, saying, hey, there's a birthday coming up or something like that.

But also just identifying opportunities to work together, to connect, or who's an expert on this thing that I'm working on. That doesn't always occur to me. And even if you're good at that, you're probably only scratching the surface of the value of your network: how you could be helping the people around you, and how they could be helping you, based on the specific context of what you're working on and the problems on your table today.

Yeah. I've also wanted to build a CRM on top of Twitter, because you have all the info there: what people are working on, your past conversations with each other, and your shared interests. At a bare minimum you should be able to search it, but proactively suggesting is the next layer.

And I guess an AI chief of staff, an AI executive assistant, something like that. Yeah. I think some people are working on that, but the problem is so big that they're working on just the automation piece. Like Lindy, who I had at my conference; it's a virtual assistant that you can trigger on your desktop or via email.

It mostly deals with scheduling, but it also helps you do a little bit of research. So I think the agents field will progress there. It might take 10 years to do it. Yeah, I can wait. It's all good. And then finally, advice for founders: what has helped you the most as a founder? You're two years into your journey.

Yeah. This kind of comes out of what you learn in art school. One thing is that when you're a studio artist, when you're in a studio or whatever, there are no external constraints.

You just kind of run on: what do I feel like working on? And the further you get away from what you feel like working on, the worse your work becomes. So having a really good feel for that desire, and being able to respect and follow it, matters, because it's not arbitrary.

If you really, really feel like working on a thing, that might be the tip of a very complex iceberg of analysis: of the field, of what people are talking about, of directions in the market, or something like that.

I don't know, I think with tldraw, as a founder, the thing I've tried to do and tried to preserve is being able to prioritize based on what is most interesting right now. That's true for what code we write and what features we work on.

That's true for which partners we spend time with in terms of who's using tldraw, and for the types of problems we want to solve. Using your own sense of what's interesting as a filter, what sounds like a fun thing to work on right now, is not naive.

It can be part of your secret sauce. And I think a lot of early founders are encouraged against that, and towards working backwards from a certain outcome and all that. And yeah, you do have to do that.

You have to put that into the mix as well, but be sure you're picking the best parts out of that mix, the parts that you want to work on. Yeah. Well, I mean, what's the point of doing this if you don't have some fun and indulge your curiosity?

Worst case, you'll build something that you love. Yeah, exactly. Good things can come out of it. Good things can absolutely come out of, like, you had an 8,000% increase in your followers or something. Should I put this on camera? I'll share my screen and we'll look at my Twitter analytics.

It's on your Substack. Yeah. Oh yeah, it is. I need to update that image; I've done it once already. Yeah, if you're a Substack reader: 72 hours into this big Make It Real virality explosion, I sat down and wrote a post on the tldraw Substack. I wanted to at least capture the vibe of what it felt like in the middle of that hurricane.

Yeah, so it's a pretty fun one. It's good to read back. Well, I'm sure it's not the last time we'll see you do something crazy viral, and I'm sure a lot of people will be exploring tldraw. I hope a lot of people, honestly. One thing I'm thinking about is embedding tldraw into my input box.

Like, why can't a canvas be part of the input? Hey, I'm talking to the good folks over at OpenAI tomorrow. Fingers crossed, maybe we get it inside of ChatGPT or something.

Like, I don't know, I want to take a drawing or take a photo and then annotate it, or just sketch something out. You should be able to do this. Yeah, exactly. It's just a good thing. The people cry out for it.

I can't build it fast enough. Yeah. Well, thank you for inspiring the rest of us. Thank you for everything. And I'm sure we'll hear more from you over the next few years. So, thanks for your time. Awesome. Thank you for your time.

Bye.