My name's Steve, Steve Ruiz. I'm from a company that I started called tldraw. tldraw started as a couple of things. It started as a digital ink library that Christopher had me implement in Excalidraw, and while I was working on that I thought, you know, there should probably be a really good SDK for building these types of things.
I'd already done a couple of projects going in that direction, so I did. It turns out that if you build a canvas other people can use, people will build cool stuff with it. So today I'm going to be talking about some of the stuff we've done with AI, playing with our own toys on our own canvas.
So I'm here on tldraw.com. It's a free whiteboard. You can come in, use it, make your diagrams, make your slides. Very similar in use case to Excalidraw, actually. But there are a few things that are special here, and I'll show you real quick. So again, this is tldraw.com, a free end-user whiteboard application.
We also have tldraw.dev, which is the SDK website. If you wanted to build stuff with tldraw, you could go to tldraw.dev and learn all about the code and the documentation and how to do that. The cool thing about the canvas is that it's, well, I'll skip this one for a second.
It's just normal web stuff, React all the way down. So for example, I can play YouTube videos and still interact with them, still draw on top of them. And every one of these little shapes can do some pretty cool stuff. I have a whole code editor here; this is just CodeSandbox embedded in tldraw.com. This one is Figma, embedded in tldraw.com. If you really like Excalidraw, you can even use Excalidraw inside of tldraw.com.
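Because a shape's body is just React, an embed like that is a small amount of SDK code. Here's a minimal sketch of a custom embed shape, loosely based on the public tldraw SDK docs; treat the exact class and prop names as approximate, and note that real shapes also want a props validator and migrations:

```tsx
import { BaseBoxShapeUtil, HTMLContainer, TLBaseShape, Tldraw } from 'tldraw'
import 'tldraw/tldraw.css'

// A shape whose body is an ordinary iframe: roughly how the YouTube,
// CodeSandbox, and Figma embeds above work.
type EmbedShape = TLBaseShape<'my-embed', { w: number; h: number; url: string }>

class EmbedShapeUtil extends BaseBoxShapeUtil<EmbedShape> {
  static override type = 'my-embed' as const

  getDefaultProps(): EmbedShape['props'] {
    return { w: 640, h: 360, url: 'https://example.com' }
  }

  // The body is plain React: it renders, it takes pointer events, the works.
  component(shape: EmbedShape) {
    return (
      <HTMLContainer style={{ pointerEvents: 'all' }}>
        <iframe
          src={shape.props.url}
          width={shape.props.w}
          height={shape.props.h}
        />
      </HTMLContainer>
    )
  }

  // The outline drawn when the shape is selected.
  indicator(shape: EmbedShape) {
    return <rect width={shape.props.w} height={shape.props.h} />
  }
}

export function App() {
  return <Tldraw shapeUtils={[EmbedShapeUtil]} />
}
```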
And I'm pretty sure, I hope this doesn't break my slides, but if I paste tldraw inside of itself, then let me see if I can draw inside of tldraw. Hang on a second. Yeah, right. We're modifying the inner one from the outer one; I'll let you think about how that works. It also has a lot of little details, I'll do this really quick: nice arrows that perfectly follow the different shapes of things, and boxes where the corners stay put as you resize.
That's part of our value proposition: we take care of all these little details, make sure the corners are right, make sure the arrows are right, stuff like that. We've built a few different AI things on top of this, and some of them are going to work today and some of them are not. Well, in 2023, we had a lot of success with Make Real. I'll skip this one for now. Make Real came from the observation that people were using tldraw for whiteboarding, but also for drawing wireframes.
And the idea was: what if we could take the diagrams and the wireframes we were drawing and just make them real? What would be involved in that? So when the vision models came out, like GPT-4 with vision (that's annoying, I'll have to do it myself), we realized you could just send a screenshot, boom, to the model and say: hey model, you're a web developer, your designer just gave you this lo-fi thing, can you actually prototype it? Can you build it? And the models could do that really well.
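The core of it is roughly this: export the selection as an image and hand it to a vision model with a "you are a web developer" system prompt. A sketch using the OpenAI SDK; the prompt wording is my paraphrase, and how you export the screenshot is up to your canvas:

```ts
import OpenAI from 'openai'

const openai = new OpenAI()

// `screenshotDataUrl` is a PNG data URL of the selected shapes,
// however your canvas exports them.
async function makeReal(screenshotDataUrl: string): Promise<string> {
  const res = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      {
        role: 'system',
        content:
          'You are an expert web developer. A designer has given you a ' +
          'low-fidelity wireframe. Reply with a single self-contained HTML ' +
          'file that makes it real.',
      },
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Here is the wireframe. Build a working prototype.' },
          { type: 'image_url', image_url: { url: screenshotDataUrl } },
        ],
      },
    ],
  })
  return res.choices[0].message.content ?? ''
}
```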
As usual, I'm going to give this a second to load while we talk. All right, it's really running with this input. The models have since become very, very ambitious. Here's another good one. Let's say I want a stop-motion application where I have a feed from my camera, I want to be able to take pictures, I want to see all those pictures, and I also want to be able to play them back in series. Using only this input, and I won't even give it a title, just to be fun, the model will spin off on that. We can watch it generate, and it will eventually come up with this. I did this during the last talk. That's my app; you can see it's doing the onion skinning.
And there's my GIF, right? Not surprising; you can add images to Cursor and it just works really well there too. But here's the fun part, and I'm going to stop this: because the result is back on the canvas, you can annotate on top of the website and use that as the next prompt, then click generate to make the next one. I've done this already, and you can see that sure enough, it made the button solid like I asked it to. So these drawing tools become a way of not only generating stuff but annotating and iterating on it, and you can get some pretty wild results.
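The iteration step is just the conversation growing: the HTML from the last round goes back in as an assistant turn, and your annotated screenshot becomes the next user turn. A sketch, continuing the one above:

```ts
import OpenAI from 'openai'

const openai = new OpenAI()

// Round two: the previous result plus a screenshot of it with my
// annotations drawn on top (both produced on the canvas).
async function iterate(previousHtml: string, annotatedScreenshot: string) {
  const res = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are an expert web developer.' },
      // What the model produced last round, so it can diff against it.
      { role: 'assistant', content: previousHtml },
      {
        role: 'user',
        content: [
          {
            type: 'text',
            text: 'Here is your prototype with my annotations drawn on top. Apply the requested changes and return the full updated HTML.',
          },
          { type: 'image_url', image_url: { url: annotatedScreenshot } },
        ],
      },
    ],
  })
  return res.choices[0].message.content ?? ''
}
```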
Make Real came out at the end of 2023, and it was one of the first tools that let people who couldn't program create software. It was pretty remarkable. So this drawing, as the input, leads to an app.
You might have seen a little flash of green there; there was a bug involved. So I just took a screenshot of the bug, sent it together with the original source, and said: hey, can you fix that particular bug?
And yeah, it did. So it's pretty cool. Now, if I don't crash my browser... hey, all right, so that's Make Real. We also did one called Draw Fast, which may or may not work today; I'm just going to see if it does. This used latent consistency models, I think that's the name; basically, create an image for me as fast as possible. We'll see if I can wake up the server here. Oh hey, look at that, this normally doesn't work this well. You have a drawing, and you have an image being created from the drawing, and as I change the drawing (come on, do it) the image changes as well. You can even take these things and flatten them, and now I can interact with the generated images: say I rotate one, or stretch it out really big.
In good circumstances this stuff works almost in real time, but you'll have to accept that that one moment of it working is the best we're going to get. I'd need two hands for this next part, but no. If I just make a whole bunch of people, will they... oh, they look like they're running because they're all sideways. Right, got it. Anyway, that's Draw Fast.
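The loop behind it is simple in shape: every time the drawing changes, export it and fire it at an image-to-image endpoint backed by a latent consistency model. The endpoint, payload, and helper names below are hypothetical stand-ins, not the actual service we used:

```ts
// Hypothetical endpoint: Draw Fast talked to a hosted latent consistency
// model; swap in whatever image-to-image service you have access to.
const LCM_URL = 'https://example.com/lcm/image-to-image'

let timer: ReturnType<typeof setTimeout> | undefined

// Call this on every change to the source drawing. `exportDrawing` is a
// hypothetical helper that renders the drawing to a PNG data URL.
function onDrawingChanged(exportDrawing: () => Promise<string>, prompt: string) {
  clearTimeout(timer)
  // A tiny debounce: LCM inference is fast enough to feel real time,
  // so we only need to avoid flooding the endpoint mid-gesture.
  timer = setTimeout(async () => {
    const image = await exportDrawing()
    const res = await fetch(LCM_URL, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ image, prompt, strength: 0.65 }),
    })
    const { result } = await res.json()
    showResultImage(result) // hypothetical: swap the output shape's image
  }, 16)
}

declare function showResultImage(dataUrl: string): void
```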
But the one I'm mainly going to talk about is tldraw computer. Well, I'll just do it. This is a graph full of little components. I'm going to write 'AI engineer', 'MCP', 'observability', I don't know, whatever, 'conference'. And I'm going to draw a picture too, maybe a big top hat or something like that.
With some playing cards, I don't know. And the brim. Got it. 'Write a short commercial' is the instruction here; I'll even say please. And run it. Okay, so a couple of things are going to happen all at once here. This graph is going to execute. Right now, the instruction block is creating a script for itself.
And then it executes that script. Sorry, this goes fast: it wrote the text, now it's generating speech, and it's also generating an image. Each one of these blocks accepts inputs and produces outputs. So this image is based on our text, which was based on this instruction, which was based on these inputs. And it's creating the speech right now, and that's going to be... whatever, you get it. Then I can keep piping it onward, but this time it'll make it sad and serious and create an image based on that.
So this is cool. Each one of these blocks, like I said, has a script for how it should use its inputs and what it should produce from them. For this 'write a short commercial' block it's tiny; I'll read it: 'Analyze inputs looking for guidance on the product, services, style, or other requirements for the commercial. Based on the inputs, write the text for a short commercial script. Output the result.' It'll apply those same instructions to whatever I give it, and it pipes the result out in the same data format that the next block down the line accepts as input.
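In code, running one of these blocks is very roughly the following: take the block's self-written procedure, serialize its typed inputs, hand both to the model, and parse typed outputs back out. This sketch uses the Google Generative AI SDK since the project ran on Gemini, but the wire format and prompt are my paraphrase, not the production code:

```ts
import { GoogleGenerativeAI } from '@google/generative-ai'

// The wire format between blocks: everything that flows along an arrow
// is typed data, so any block can accept any other block's output.
type Data =
  | { type: 'text'; text: string }
  | { type: 'image'; src: string }

interface ComputerBlock {
  // The self-written script, e.g. "Analyze inputs looking for guidance
  // on the product... write a short commercial... output the result."
  procedure: string
  inputs: Data[]
}

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!)
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' })

async function runBlock(block: ComputerBlock): Promise<Data[]> {
  const prompt = [
    'You are one component in a dataflow graph.',
    `Your procedure: ${block.procedure}`,
    `Your inputs: ${JSON.stringify(block.inputs)}`,
    'Follow the procedure and reply with only a JSON array of outputs,',
    'using the same schema as the inputs.',
  ].join('\n')

  const result = await model.generateContent(prompt)
  // In practice you'd validate this; models don't always return clean JSON.
  return JSON.parse(result.response.text()) as Data[]
}
```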
We did this in collaboration with Google. They came to me and said: hey, we have Gemini 2 coming out, we want to launch with a bunch of cool demos and a bunch of cool partners, do you want to be part of that? And I'm like: awesome, does that mean we get early access to the new models? They had shown the demos of using your phone and asking 'where did I leave my keys', all that real-time stuff. And they're like: no. All right. Do I get anything? No, you've got to work with what you've got. So, cool, all right, we'll do that. And so we did. Gemini 1.5 was out, and that's pretty cool, but Gemini Flash was also out, and Flash was fast, pretty good, and multimodal.
That was the inspiration for this as we worked on it more. That's good, that's good: 'sad and serious AI engineer conference'. Yeah, good stuff. As we worked on this more, we realized that you could do computer stuff with it.
You could take an instruction and say something like, I'll do this, 'add up all your inputs'. Then you give it some inputs, say 2 and, it's hard to write this, 11, and it will come up with what you'd expect: 13. But the execution here is not being done in code. The execution is being done by a language model, and language models are capable of this kind of nonlinear thinking. So if I give it 2 and 'octopus' as the inputs and ask it to add them up, well, an octopus is not a number, but if you force the model to infer a number from it anyway, which we do in the prompt, maybe it's 8, and 8 and 2 make 10. There you go. And if one of the inputs is a camera feed, and it's me, I'm going to try to do this, hold on a second, holding up four fingers: is it going to be 14? Maybe? Yeah, there we go. So it's able to use whatever you hand it. Thank you.
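The trick that turns '2 + octopus' into 10 lives entirely in the prompt: the model isn't allowed to refuse, it has to coerce every input into a number first. A paraphrase of that rule (the actual prompt wording differs):

```ts
// A paraphrase of the coercion rule; the production prompt is worded
// differently, but this is the idea.
const COERCION_RULE = `
If an input is not a number, you must still infer a number from it.
Use the most obvious numeric reading: an octopus is 8 (arms), a hand
held up to the camera is 5 (fingers), and so on. Then apply the
instruction (e.g. "add up all your inputs") to the inferred numbers
and output only the resulting value.
`
```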
And it shouldn't be that surprising: it's just a multimodal model taking a bunch of inputs and producing outputs. We went further with this, and I'm going to have to jump ahead. By the way, the killer use case for this, if it's not immediately obvious, is turning your daughter's drawings into pictures and stories and piping them all around.
But where's this one? This is a good one. I was playing a lot of Factorio at the time as well. Oh no, this is the wrong one; hang on, all the way down at the bottom, grab this one. So, the idea here is having machines that even include cycles and loops, and that will just operate forever.
In this one, it comes up with a random pop song, adds it to a list, feeds the list back in so it doesn't repeat itself, asks 'is this song about love?', and then sorts it accordingly. Again, we're working with language models, so we have a Boolean value of yes, no, or maybe. The result feeds back around and keeps piping, and I can just leave this running forever, spending the credits that Google gave me to burn. This is really fun.
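That three-valued switch is about this much code; a sketch, with hypothetical names:

```ts
// Language models don't hand you a crisp boolean, so the switch block
// has three outputs instead of two.
type FuzzyBool = 'yes' | 'no' | 'maybe'

function routeSong(
  isAboutLove: FuzzyBool,
  song: string,
  outputs: Record<FuzzyBool, string[]>
) {
  // Each answer gets its own downstream wire; "maybe" is a first-class branch.
  outputs[isAboutLove].push(song)
}
```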
tldraw computer got pretty popular. Not necessarily as popular as Make Real, but it's pretty amazing what you can do with it, and it really rewards creativity, to put it lightly. I have seen people using it to do actual multi-stage prompting and decision-making analysis. And you can imagine this being asynchronous, somewhere up in the cloud. Maybe that's what we do next: take this CSV of email addresses of people who've engaged with our product, email them all, get the responses, do sentiment analysis on whether they liked it, do something next, and so forth, then wait for me, text me and ask: should I really email this person again?
Maybe I say yes. So for a big, long-lived, asynchronous process that could run in parallel, this would be a great interface for designing it, and everyone seems to get it. When we originally did this, the creative prompt, the philosophy of the project before I went home and prototyped it, was: I want a computer that works the way I thought a computer worked before I knew how a computer works. Maybe we just have: I want this stuff, and I want to do this to it, and then I want to take the results and go over here. So that's tldraw computer. I wasn't going to show Teach, but I will. The prompt is: create a flow chart that begins with 'AI' and ends with 'engineer'; incorporate existing shapes.
When you have a really cool hackable canvas, an SDK for the canvas with a runtime API, it plays really, really well with other AI tools. Even though these models aren't great at this yet, you can very quickly get one working with the canvas as a kind of virtual collaborator; you can get it to do stuff.
The demo that I always show, and I'll do it really quick, is 'draw a cat'. Somewhere on this page there's a pelican riding a unicycle, and a lot of stupid drawings besides. But yeah, say 'draw a cat' and it'll draw a cat. The key thing is that it's not doing this as an image. It's not painting pixels the way Midjourney would. It's doing it as text: it returns a structure that I can map to shapes on the canvas. So I can work with the result myself, and I can correct it.
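Roughly, the mapping goes like this: the model replies with a JSON description of shapes, and you feed it to the editor. The response schema here is illustrative, not the actual Teach format:

```ts
import { Editor, createShapeId } from 'tldraw'

// The model replies with a description of shapes rather than pixels,
// e.g. [{ "type": "geo", "x": 120, "y": 80,
//         "props": { "geo": "ellipse", "w": 160, "h": 140 } }, ...]
function drawFromModel(editor: Editor, modelResponse: string) {
  const shapes = JSON.parse(modelResponse) as Array<{
    type: string
    x: number
    y: number
    props: Record<string, unknown>
  }>

  // Each entry becomes an ordinary canvas shape, so I can move it,
  // recolor it, or draw on top, and the model can read my edits back.
  editor.createShapes(
    // Loosely typed for the sketch; real code would validate each shape.
    shapes.map((s) => ({ id: createShapeId(), ...s })) as any
  )
}
```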
And it can work with my stuff as well. So if I draw this, in orange, and I say 'make the cat blow out the candle'... I didn't tell it that was a candle, but let's see if it can do it. I don't think cats can actually blow, but I don't know that for sure. This one is using Claude as the backend; if you want to know how it works, definitely catch me afterwards. Yeah, right on. Hey, and we get smoke as well.
This is wonderful, right? A lot of this stuff, and in fact I would say this is our advantage over the bigger companies in this space: 'shitty but amazing' is definitely on brand for tldraw. If this seems like a problem you might want to work on, definitely talk to me, because we have some tools that make it easier.
People build all sorts of crazy stuff with tldraw. This is Grant Kot's liquid simulation, which uses tldraw as the geometric, physical control layer, the authoring layer on top of it. Companies build really cool stuff with tldraw too; Observable is building with it now. It's incredible. I think we're not even scratching the surface of what can be done with this paradigm and these tools. Please build something amazing. I've got the canvas; we have the technology. So that's my talk. Thank you very much.