The State of AI in production — with David Hsu of Retool


Chapters

0:00 Introduction
2:43 Retool's founding story and decision not to present at YC demo day initially
5:40 Experience in YC with Sam Altman
9:27 Importance of hiring former founders early on
10:43 Philosophy on fundraising - raising less money at lower valuations
14:57 Overview of what Retool is
18:09 Origin story of Retool AI product
23:04 Decision to use open source vector database pgvector
24:42 Most underrated Retool feature loved by customers
28:07 Retool's AI UX and workflows
34:05 Zapier vs Retool
37:41 Updates from Retool's 2023 State of AI survey
40:34 Who is adopting AI first?
43:43 Evolving engineering hiring practices in the age of Copilot/ChatGPT
45:52 Retool's views on internal vs external AI adoption
48:29 OSS models vs OpenAI in production
50:54 Additional survey questions to ask in 2024
52:07 Growing interest in multimodal AI capabilities
54:00 Balancing enterprise sales vs bottom-up adoption
60:00 Philosophical thoughts on AGI and intentionality

Transcript

Hey everyone, welcome to the Latent Space Podcast. This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host swyx, founder of Smol.ai. Hi, and today we are in the studio with David Hsu from Retool. Welcome. Thanks, excited to be here. We'd like to give a little bit of an intro from what little we can get about you and then have you talk about something personal.

You got your degree in philosophy and CS from Oxford. I wasn't aware that they did like double degrees. Is that a thing? It's actually a single degree, actually, which is really cool. So basically, yeah, so you study computer science, you study philosophy, and you study the intersection. The intersection is basically AI, actually, and sort of, can computers think, or can computers be smart?

Like, you know, what does it mean for a computer to be smart? As well as logic, which is also another intersection, which is really fun too. At Stanford, you know, it might be called symbolic systems or whatever. And it's always hard to classify these things when we don't really have a word for it.

Now, like this, everything's just called AI. Five years ago, you launched Retool. You were in YC, Winter '17, and, you know, it's just been a straight line up from there, right? I wish. And that's your sort of brief bio that I think you want most people to know.

What's something on your LinkedIn that people should know about you, maybe on a personal hobby or, you know, let's just say something you're very passionate about that might not be about Retool? I read quite a bit. I probably read like two books a week, around about, so it's a lot of fun.

I love biking. It's also quite a bit of fun, so, yeah. Do you use Retool to read? Like, what the hell? No, I don't use Retool to read, so that'd be funny. What do you read? How do you choose what you read? Any recommendations that you just fall in love with?

It's pretty diverse. I'm mostly reading fiction nowadays, so fiction's a lot of fun. I think it maybe helps me be more empathetic, if you will. I think it's a lot of fun, actually, to sort of see what it's like to be in someone else's shoes, so that's a lot of fun.

Besides that, I'm really into philosophy as well. I find philosophy just so interesting, especially logic. We can talk more about that for probably hours if you want, so. Yeah, I have a sort of casual interest in epistemology, and I think that any time you try to think about machine learning on a philosophical angle, you have to start wrestling with these very fundamental questions about how do you know what you know.

Yeah, totally. What does it mean to know? What does it mean to know? Yeah, all right, so over to you. That's its own podcast. We should do a special edition about it, but that's fun. Let's just maybe jump through a couple things on Retool that I found out while researching your background.

So you did YC, but you didn't present at Demo Day initially because you were too embarrassed of what you had built. Can you maybe give any learnings to founders on jumping back from that? I've seen a lot of people kind of give up early on because they were like, "Oh, this isn't really what I thought it was going to be to be a founder." They told me I would go to YC and then present and then raise a bunch of money, and then everything was going to be easy.

So how did that influence also how you build Retool today in terms of picking ideas and deciding when to give up on it? Yeah, let's see. So this is around 2017 or so. So we were supposed to present at the March Demo Day, but then we basically felt like we had nothing really going on.

We had no traction, we had no customers. And so we were like, "Okay, well, why don't we take six months to go find all that before presenting?" Part of that, to be honest, was I think there's a lot of noise around Demo Day, around startups in general, especially because there's so many startups nowadays.

And I guess for me, I'd always want to sort of under-promise and over-deliver, if you will. And on Demo Day, I mean, maybe you two have seen a lot of the videos, it's a lot of honestly over-promising and under-delivering because every startup says, "Well, I'm going to be the next Google or something." And then you peer under it and you're like, "Wow, nothing's going on here," basically.

So I really didn't want that. And so we chose actually not to present at Demo Day, mostly because we felt like we didn't have anything substantial underneath. Although actually a few other founders in our batch probably would have chosen to present in that situation, but we were just kind of embarrassed about it.

And so we basically took six months to just say, "Okay, well, how do we get customers?" And we're not presenting until we have a product that we're proud of and customers that we're proud of. And fortunately, it worked out. Six months later, we did have that. So I don't know if there's much to learn from this situation besides I think social validation was something that I personally had never really been that interested in.

And so it was definitely hard because it's hard to sort of, it's almost like you go to college and all your friends are graduating when you failed or something, you failed the final and you have to like redo it here. It's like, well, it kind of sucks that all your friends are up there and on the podium presenting and they are raising a ton of money and you're kind of being left behind.

But in our case, we felt like it was a choice and we could have presented if we really wanted to, but we would not have been proud of the outcome or proud of what we were presenting. And for us, it was more important to be true to ourselves, if you will, and show something that we're actually proud of rather than just raise some money and then shut the company down after two years.

- Yeah, any Sam Altman stories from the YC days? Could you tell in 2017 that Sam was going to become, like, run the biggest AI company in the world? - Wow, no one's asked me that before. Let me think. Sam was, I think he was, I forget, maybe president of YC in our batch.

We actually weren't in his group actually at the very beginning. And then we got moved to a different group. I don't honestly have, I think Sam was clearly very ambitious when we first met him. I think he was very helpful and sort of wanted to help founders. But besides that, I mean, I think we were so overwhelmed by the fact that we had to go build a startup and we were not honestly paying that much attention to every YC partner and taking notes on them.

- That makes sense. Well, and then just to wrap some of the Retool history nuggets, you raised a Series A when you were at $1 million of revenue with only three or four people. How did you make that happen? Any learnings on keeping teams small? I think there's a lot of overhiring we've seen over the last few years.

I think a lot of AI startups now are kind of like raising very large rounds and maybe don't know what to do with the capital, so. - Yeah. So this is kind of similar actually to sort of why we chose not to present at Demo Day. And the reason was it feels like a lot of people are really playing startup.

I think PG has an essay about this, which is like, you're almost like playing house or something like that. Like, it's like, oh, well, I hear that in a startup, we're supposed to raise money and then hire people. And so therefore you go and do that. And you're supposed to do a lot of PR because that's what startup founders do.

And so you could do a lot of PR and stuff like that. And for us, we always thought that the point of starting a startup is basically we have to create value for customers. If you're not creating value for customers, nothing else is gonna work, basically. You can't continue to raise money or hire people if you don't have customers, if you're not creating value.

And so for us, we were always very focused on that. And so that's initially where we started. I think it's, again, maybe it goes to like the sort of, you know, presenting something truthful about yourself or staying true to yourself, something to that effect, which is, we didn't want to pretend like we had a, you know, thriving business before we actually did.

And so the only way to not pretend was actually to build a thriving business. And so we basically just, you know, put our heads down and, you know, grind it away for probably a year, year and a half or so. Just writing code, talking to customers. And I think that at that point, we had raised something like maybe a million dollars, maybe a million and a half, something out of YC.

So, I mean, to us, to people, you know, that was a huge amount of money. I was like, wow, like how are we ever gonna spend a million and a half? Our runway really was like, you know, five, six years at that point, right? 'Cause we're paying ourselves 30, 40K a year.

And so then the question was not like, oh, we're gonna run out of runway. The question was like, we better find traction, because if we don't find traction, we're gonna, you know, just give up psychologically. Because, you know, we're grinding on it, and if you grind on an idea for four years and nothing happens,

You're probably psychologically gonna give up. I think that's actually true in most startups. Actually, it's like most startups die in the early stages, not because you run out of money, but really because you run out of motivation. And for us, had we hired people, I think it would have actually been harder for us because we would have ran out of motivation faster.

Because when you're pre product-market fit, actually, trying to lead a team of like, you know, 10 people, for example, towards product-market fit, I think it's actually pretty hard. Like it's, you know, every day people are asking you, so why are we doing this? And you're like, I don't know, man.

Like, hey, trust us. And that's actually a very tiring environment to be in. Whereas if it's just like, you know, the founders figuring out product-market fit, I think that's actually a much sort of safer path, if you will. It's also less risky for employees. Like when you hire employees, you have an idea,

you have product-market fit, you have customers. That's actually, I think, a lot more stable of a place for employees to join as well, so. - Yeah, and I find that, you know, typically the sort of founder-employee relationship is, you know, the employee expects the founder to just tell them what to do.

And you don't really get critical pushback from the employee, even if they're a buddy, and even if they like you, as an early engineer. It's very much like the role play of like, once you have that founder hat on, you think differently, you act differently, and you're more scrappy, I guess, in trying to figure out what that product is.

Yeah, I really resonate with this 'cause I'm going through this right now. (laughing) - That's awesome. One thing we did actually early on that I think has paid a lot of dividends, especially, you know, now that we're a lot larger, is we hired a lot of former founders. So I want to say like, when we were, I don't know, 20, 30, 40 people, we were probably like half former founders at each one of those stages.

And that was actually pretty cool because I think you infuse sort of a, you know, get-things-done kind of culture, an outcome-oriented culture with very little politics, 'cause, you know, no one came from larger companies. Everyone was just like, "This is my own startup. Let me go figure out how to achieve the outcome for the customer." And so I think from a cultural perspective even today, a lot of our culture is sort of very self-startery.

I think it's actually because of sort of these, you know, early founders that we hired, which was really, really, you know, we're really lucky to have them, so. - Yeah, and then closing off on just a little bit of the fundraising stuff, something notable that you did was, in 2021, when it was sort of peak ZIRP and everyone was raising hundreds and hundreds of millions of dollars, you, you know, intentionally raised less money at a lower valuation, as the headline went.

And I think it's a testament to your just overall general philosophy in building Retool that you're just very efficient and you do things from first principles. Yeah, I mean, like any updates on like, would you still endorse that? You know, would you recommend that to everyone else? What are your feelings sort of two years on from that?

- Yeah, so on a high level, yeah, so exactly, you said it's correct, where we raised less money at a lower valuation. And I think the funny thing about this is that when we first announced that, even, you know, internally and both externally, I think people were really surprised actually, because I think Silicon Valley has been conditioned to think, oh, raising a giant sum of money at a giant valuation is a really good thing.

It's like, you know, you should maximize both the numbers basically. But actually, maximizing both the numbers is actually really bad, actually for the people that matter the most, you know, i.e. your employees or your team. And the reason for that is, raising more money means more dilution. If you look at, you know, a company like Uber, for example, if you joined Uber at like, I don't know, like a $10 billion valuation, or, you know, let's say you joined before their huge rounds, which I think happened at a few billion dollars in valuation, employees actually got diluted a ton when Uber fundraised.

So if, you know, Uber dilutes themselves by 10%, for example, let's say they raise 500 million at a 5 billion valuation, for example, employees' stake goes down by 10% in terms of ownership. Same with, you know, previous investors, same with the founders, et cetera. And so if you look at actually a lot of founders from that generation of startups, you know, those that fundraised like, you know, 2013 to 2017, a lot of the founders by IPO only have a few percentage points, actually, of the company.
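
To make that dilution math concrete, here's a quick sketch; the numbers are illustrative only, not any company's actual cap table.

```typescript
// Rough round-dilution arithmetic; all numbers are illustrative only.
const postMoney = 5_000_000_000; // $5B post-money valuation
const newCapital = 500_000_000;  // $500M raised in the round
const dilution = newCapital / postMoney; // new investors own 10%

// Every existing holder keeps (1 - dilution) of their prior stake:
const employeeStakeBefore = 0.005; // 0.5% before the round
const employeeStakeAfter = employeeStakeBefore * (1 - dilution);
console.log({ dilution, employeeStakeAfter }); // { dilution: 0.1, employeeStakeAfter: 0.0045 }
```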

And if the founders only have a few percentage points, you can imagine how, you know, how little employees have. And so that I think is actually just a really, you know, bad thing for employees overall. Secondly, a higher valuation, given the same company quality, is always worse.

And so basically what that means is if you are fundraising as a company, you could command a certain valuation in the market. You know, let's say it's, you know, X, for example. Maybe you get lucky and you can raise two times X, for example. But if you choose two times X, your company itself is not fundamentally changed.

It's just that, you know, for some reason, investors want to pay more for it. You know, maybe today you're an AI company, for example. And so investors are really excited about AI and want to pay more for it. However, that might not be true in a year or two years' time, actually.

And if that's not true in two years' time, then you're in big trouble, actually. And so now I think you see a lot of companies that raised really high valuations around 2021. And now they're like, man, we're at like a 100x multiple, or, you know, we raised at a 300x multiple, for example.

And if we were at a 300x multiple then, you know, maybe now we're at like 200x. And like, man, we just can't raise money ever again. Like, you know, we're gonna have to grow like 50x to go raise money, you know, at a reasonable valuation, let's say. And so I think that is really challenging and really demotivating for the team.

And so I think a lower valuation actually is much better. And so for us, in retrospect, you know, to answer your question, two years later, we did not predict, you know, the crash, if you will. But given it, I think we've done extremely well, mostly because our valuation is not sky high.

Because if our valuation were sky high, I think we'd have a lot more problems. We'd probably have recruiting problems, for example. We'd probably have a lot of internal morale problems, et cetera. A lot of people would be like, you know, why is the valuation this way? We might have cash flow problems because we might have to go raise money again, you know, et cetera.

We can't because the valuation is too high. So I would urge, I think, founders today to, quote unquote, leave money on the table. Like there are some things that are not really worth optimizing. I think you should optimize for the quality of the company that you build, not the valuation at which you raise or the amount you raise, et cetera, so.

- Hindsight is 20/20, but it looks like, you know, you made the right call there anyway. So maybe we should also, for people who are not clued into Retool, do a quick, like, what is Retool? You know, I see you as the kings or the inventors of the low-code internal tooling category.

Would you agree with that statement? Like, you know, how do you usually explain Retool? - I generally say it's like Legos for code. We actually hate the low-code moniker. We actually never, in fact, we have docs saying we will never use it internally. - Yeah. - Or even to customers.

And the reason for that is I think low-code sounds very not developer-y. And developers, they hear the phrase low-code, they're like, oh, that's not for me. Like, I love writing code. Like, why would I ever want to write less code? And so for us, Retool's actually built for developers.

Like, 95% of our customers actually are developers, actually, and so that is a little bit surprising to people. I'll generally explain it as, and this is, you know, kind of a funny joke, too. I think part of the reason why Retool's been successful is that developers hate building internal tools.

And you can probably see why. I mean, if you're a developer, you've probably built internal tools yourself. Like, it's not a super exciting thing to do. You know, it's like piecing together a CRUD UI. You've probably, you know, pieced together many CRUD UIs in your life before. And there's a lot of grunt work involved.

You know, it's like, hey, state management. It's like, you know, data validation. It's like displaying error messages, disabling buttons. Like, all these things are not really exciting, but you have to do it because it's so important for your business to have high-quality internal software. And so what Retool does is basically allows you to sort of piece together an internal app really fast, whether it's a front-end, whether it's a back-end or whatever else.

So yeah, that's what Retool is. - Yeah, actually, so you started hiring, and so I do a lot of developer relations and community building work, and then you hired Kritika, who has now moved on to OpenAI, to start out your sort of DevRel function. And I was like, what is Retool doing courting developers?

And then she told me about this, you know, developer traction. And I think that is the first thing that people should know, which is that, like, actually the burden and weight of internal tooling often falls to developers, or it's an Excel sheet somewhere or whatever. But yeah, you guys have basically created this market.

In my mind, I don't know if there was someone clearly before you in this, but you've clearly taken over and dominated. Every month, YC, there's a new YC startup launching that is like, we're the open-source Retool, we're like the lower-code Retool, whatever. And it's pretty, I guess it's endearing.

We'll talk about Airplane later on, but yeah, I think that, I've actually used Retool in my previous startups for this exact purpose. Like, we needed a UI for AWS RDS that the rest of our less technical people, like our sales operations people could interact with, and yeah, Retool was perfect for that.

- Yeah, that's a good example of, like, that's an application that an engineer probably does not want to build. Like building an app on Salesforce or something like that is not exciting. And Salesforce API sucks, it's very limited, it's not a fun experience at all. But piecing together a Retool is quite a bit easier.

So yeah, let me know if you have any feedback, but awesome, thanks for using it. - Yeah, no, of course. Well, so, you know, like more recently, I think about three, four months ago, you launched Retool AI. Obviously, AI has been sort of in the air. I'd love for you to tell the journey of sort of AI products ideation within Retool.

Given that you have a degree in this thing, I'm sure you're not new to this, but like, when did the, when would you consider sort of the start of the AI product thinking in Retool? - Yeah, wow, that's funny. So we actually had a joke internally at Retool. We have a product roadmap every year, and I think it was like 2019 or something.

We had this joke, which was like, what are we gonna build this year? We're gonna build AI programming, is what we always said as a joke. And so, but it was funny 'cause we were like, ah, that's never gonna happen in my life. Let's add it because it's like a buzzwordy thing that enterprises love.

So let's look at it. And so it was almost like a funny thing, basically. But it turns out, you know, we're actually building that now. So this is pretty cool. So let's say maybe AI thinking at Retool probably first started maybe like, I would say maybe a year and a half ago, something like that.

And the evolution of our thinking was basically, when we first started thinking about it, sort of in a philosophical way, if you will, it's like, well, what is the purpose of AI? And how can it help what Retool does? And there were two sort of main prongs, if you will, of value that we got.

One was helping people build apps faster. And so you've probably seen that with Copilot, you've seen sort of so many other coding assistants, like v0, you know, stuff like that. So that's interesting because, you know, engineers, as we talked about, do some grunt work. And the grunt work, you know, maybe could be automated by AI was sort of the idea.

And it's interesting, 'cause we actually, I would say kind of proved or disproved the hypothesis a little bit. If you talk to most engineers today, like a lot of engineers do use Copilot. But if you ask them, like, how much time has Copilot saved you? It's not like coding is 10x faster than before.

You know, coding is maybe like 10% faster, maybe 20% faster or something like that, basically. And so it's not like a huge step change, actually. And the reason for that, as we think, is because the sort of fundamental frameworks and languages have not changed. And so if you're building, let's say, you know, like the sales ops tool we were talking about before, for example, let's say you get AI to generate a first version of that, for example.

The problem is that it probably generated it for you in like JavaScript, 'cause you're, you know, writing for the web browser, for example, right? And then for you to actually go proofread that JavaScript, for you to go read the JavaScript to make sure it's working, you know, to fix the subtle bugs that AI might have caused, hallucinations, stuff like that, actually takes a long time and a lot of work.

And so for us, the problem is actually not the process of coding itself. It is more that the language or the framework, we think, is way too low level. It's kind of like, you know, punched cards. Like, you know, let's say back in the day, when people programmed with punched cards, AI could help you generate punched cards.

You're like, okay, you know, I guess that helps me, you know, punching cards is a little bit faster now 'cause I have a machine punching them for me. But like, when there's a bug, I still have to go read all the punched cards to figure out what's wrong, right?

It's like, it's a lot of work, actually. And so for us, that was the sort of initial idea was, can we help engineers code faster? That would, you know, I think it's somewhat helpful, to be clear. And again, I think it's 10 or 20%. So we have things like, you know, you can generate SQL queries by AI, you can generate UIs by AI and stuff like that.

So that's cool, to be clear. But it's not, I think, the step change of programming that we all wanted. And so that, I think, is, you know, we're investing somewhat in that. But the bulk of investment actually is a number two, which is helping developers build AI-enabled applications faster.

And the reason why we think this is so exciting is we think that practically every app, every internal app, especially, is going to be AI-infused over the next, like, three years. And so every tool you might imagine, so like the tool you were mentioning, like a sales operations tool, for example, probably, you know, if you were to build it today, it would incorporate some form of AI.

And so, you know, we see today, like for us, a lot of people build, you know, I'll say sales management tools, in Retool. An example is there are a bunch of companies building, like, sales forecasting tools. So they basically have salespeople enter their forecast, you know, for the quarter, every quarter, like, hey, I have these deals, and these deals are going to close, these deals are not going to close.

You know, I think there's upside in these, downside in these, stuff like that, basically. And what they're doing now is, so you can imagine just pulling in deals from your Salesforce database. And so it pulls in the deals and actually uses AI to compute, like, okay, well, you know, given previous deal dynamics, like, these are the deals that are more likely to close this month versus next month, close this quarter, next quarter, et cetera.

And so it could actually, you know, pre-write you a draft of, you know, your report, basically. And so that's an example where I think every app, whether it's, you know, a sales app, or, let's say, a fraud app, a, you know, FinTech app, you know, whatever it is, basically, especially internal apps, I think, like you said, Alessio, in order to make you more productive, is going to incorporate some form of AI.

So then the question is, can we help them incorporate this AI faster? So that's why we launched, like, a vector database, for instance, built directly into Retool. That's why we, you know, launched Retool AI, actually, so you don't have to, you know, go figure out what the best model is and do testing and stuff like that; we just, you know, give it to you out of the box.

So for us, I think that is really the exciting future: can we make every app built in Retool use AI a little bit and make people more productive, so. - We talked with Jeffrey Wang, who's the co-founder and chief architect of Amplitude. He mentioned that you just use Postgres with pgvector.

When you were building Retool Vectors, how did you think about, yeah, leveraging a startup to do it versus putting vectors into one of the existing data stores that you already had? I think, like, you already have quite large customer scale, so, like, you're maybe not trying to get too cute with it.

Any learnings and tips from that? - Yeah, I think a general philosophical thing I think we believe is, we think the open source movement in AI, especially when it comes to all the supporting infrastructure is going to win. And the reason for that is we look at, like, developer tools in general, especially in such a fast-moving space.

In the end, like, there are really smart people in the world that have really good ideas, and they're going to go build companies, and they're going to go build projects, basically, around these ideas. And so, for us, we have always wanted to partner with maybe more open source kind of providers or projects, you could say, like PG Vector, for example.

And the reason for that is it's easy for us to see what's going on under the hood. A lot of this stuff is moving very fast. A lot of times there are bugs, actually, and so we can go look and fix bugs ourselves, and contribute back, for example. But we really think open source is going to win in this space.

It's, you know, it's hard to say about models. I don't know about models, necessarily, because, you know, it starts to get pretty complicated there. But when it comes to tooling, for sure, I think there's just, like, so much, and there's an explosion of creativity, if you will. And I think betting on any one commercial company is pretty risky, but betting on the open source sort of community and the open source contributors, I think it's a pretty good bet.

So that's why we decided to go with pgvector.
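
For a concrete picture, here's a minimal sketch of what using pgvector from a Node client looks like; the table, schema, and connection details are illustrative assumptions, not Retool's actual implementation.

```typescript
// Minimal pgvector sketch; names and schema are illustrative, not Retool's.
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();

  // pgvector is just a Postgres extension; embeddings live in an ordinary table.
  await client.query("CREATE EXTENSION IF NOT EXISTS vector");
  await client.query(`
    CREATE TABLE IF NOT EXISTS documents (
      id        bigserial PRIMARY KEY,
      content   text,
      embedding vector(1536)  -- dimension of a typical OpenAI embedding
    )`);

  // Nearest-neighbor search: "<->" is pgvector's L2 distance operator.
  const queryEmbedding = JSON.stringify(Array(1536).fill(0.1)); // placeholder vector
  const { rows } = await client.query(
    "SELECT content FROM documents ORDER BY embedding <-> $1 LIMIT 5",
    [queryEmbedding]
  );
  console.log(rows);

  await client.end();
}

main().catch(console.error);
```

- Is there any most underrated feature, like something that customers maybe love that you didn't expect them to really care about? I know you have, like, text-to-SQL, you have UI generation. There's, like, so many things in there. Yeah, what surprised you?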

- Yeah, so what's really cool, and this is my sense of the AI space overall, you know, maybe a bit of a hot take as well, is that, especially in Silicon Valley, where a lot of the innovation is happening, I think there are actually not that many AI use cases, to be honest.

And AI, to me, even as of, what, like, January 19th of 2024, still feels like it's in search of truly good use cases. And what's really interesting, though, about Retool, and I think we're in a really fortunate position, is that we have this large base of sort of customers, and a lot of these customers are actually much more legacy, if you will, customers.

And a lot of them actually have a lot of use cases for AI. And so, to us, I think we're almost in, like, a really perfect or unique spot. We're able to adopt some of these technologies and then provide them to some of these, like, older players. So one example that actually really shocked and surprised me about AI was, so we have this one, let's say, clothing manufacturer.

I think it's either the first or second largest clothing manufacturer in the world who's using Retool, and, you know, a ginormous company, pretty multinational, with, you know, stores in pretty much every mall in the world. And they have one problem, which is they need to design styles every year for the next year, basically, for every season.

So, like, hey, let's design for, like, summer 2024, for example. And so what they used to do before is they would hire designers, and the designers would go study data. They'd be like, okay, well, it looks like, you know, maybe floral patterns are really hot in, like, you know, California, for example, in 2023, and, like, do I think it's going to be hot in 2024?

Well, let me think about it, I don't know, you know, and if so, if I believe that it's going to be hot, let me go design some floral patterns, actually. And what they ended up doing, actually, is they automated a lot of this process away in Retool, so they now have built a Retool app that allows, actually, a non-designer, so, like, an analyst, if you will, to analyze, like, you know, what are the hottest-selling patterns in particular geos, like, this was really hot in Brazil, this was really hot in China, this was really hot, you know, somewhere else, basically.

And then they actually feed it into an AI, and the AI, you know, with DALL-E and other image generation APIs, actually generates patterns for them, and they print the patterns, which is really cool. And so that's an example of, like, honestly, a use case I would have never thought about, like, thinking about, like, you know, how clothing manufacturers create their next line of clothing, you know, for the next season, like, I don't know, I never thought about it, to be honest, nor did I ever think, you know, how it would actually happen.

And the fact that they're able to leverage AI, and they actually, you know, leverage multiple things in Retool to make that happen, is really, really cool. And so that's an example where I think if you go deeper, if you go outside of Silicon Valley, there are actually a lot of use cases for AI, but a lot of this is not obvious.

Like, you have to get into the businesses themselves, and so I think we're, we personally are in a really fortunate place, but if, you know, you're working in the AI space and want to find some use cases, please come talk to us. Like, you know, we're really excited about marrying sort of technology with use cases, which I think is actually really hard to do right now, so.

- You know, I have a bunch of, like, sort of standing presentations around, like, how this industry is developing, and, like, I think the foundation model layer is understood, the sort of LangChain, vector DB, RAG layer is understood, and, like, what is, I always have a big question mark, and I actually have you and Vercel v0 in that box, which is, like, sort of the UI layer for AI, and, like, you know, you are perfectly placed to expose those functionalities to end users, even if you personally don't really know what they're going to use it for, and sometimes they'll surprise you (laughs) with their creativity.

One segment of this, and I do see some startups springing up to do this, is related to something that you also build, but it's not strictly AI-related, which is Retool Workflows, the sort of canvas-y boxes and arrows, point and click, do this, then do that, type of thing, which every, you know — I hate that, you know, what are we calling it, low-code? —

every internal tooling company (laughs) eventually builds. You know, I worked at a sort of workflow orchestration company before, and we were also discussing internally how to make that happen, but you are obviously very well-positioned for that. - Yeah, basically, like, you know, do you think that there is an overlap between Retool Workflows and AI?

I think that, you know, there's a lot of interest in sort of chaining AI steps together. Do people, I couldn't tell if, like, that is already enabled within Retool Workflows, I don't think so, but you could sort of hook them together sort of jankily. Like, what's the interest there?

You know, is it all of a kind, ultimately, in your mind? - It is 100% of the time, and yes, so you could actually already, so a lot of people actually are building AI workflows down in Retool, which is, we can talk about that in a second, but a hot take here is actually, I think a lot of the utility in AI today, like, I would probably argue 60, 70% of the utility, like, you know, businesses are found in AI, is mostly via ChatGPT, and across the world, too.

And the reason for that is, I think the ChatGPT sort of UI, you could say, or interface, or user experience is just really quite good. You know, you can sort of converse with an AI, basically. And that said, there are downsides to it. If you talk to, like, a giant company, like a J.P.

Morgan Chase, you know, for example, they may be reticent to have people copy-paste stuff into ChatGPT, for example, even on ChatGPT Enterprise, for example. Some problems are that, I think Chat is good for one-off tasks. So if you're like, hey, I want a first version of a presentation or something like that, you know, and help me write this first version of a doc or something like that, Chat is great for that.

It's a great, you know, very portable, you know, if you will, form factor. So you can do that. However, if you think about it, you think about some of the economic productivity more generally, like, Chat, again, will help you, like, 10 or 20%, but it's unlikely that you're gonna replace an employee with Chat.

Like, you're not gonna be like, oh, I have a relationship manager at J.P. Morgan Chase, and I've replaced him with an AI chatbot. Like, it's kind of hard to imagine, right? 'Cause like, the employees actually do a lot of things besides, you know, just, you know, generating, you know, maybe another way of putting it is like, Chat is like a reactive interface.

Like, it's like, when you have an issue, you'll go reach out to Chat, and Chat might solve it. But like, Chat is not gonna solve 100% of your problems. It'll solve, like, you know, 25% of your problems, like, you know, pretty quickly, right? And so what we think the next, like, big breakthrough in AI is, is actually, like, automation.

It's not just like, oh, I have a problem, let me go to a chatbot and solve it. Because like, again, like, you know, people don't spend 40 hours a week in a chatbot. This time, they've been like, two hours a week in a chatbot, for example. And so what we think can be really big, actually, is you're able to automate entire processes via AI.

Because then, you're really realizing the potential of AI. It's like, it's not just like, you know, a human copy-pasting data into an AI chatbot, and then, you know, pasting it back out, or copying it back out. Instead, it's like, the whole process now is actually done in an automated fashion without the human.

And that, I think, is what's gonna really unlock sort of big economic productivity, and that's what I'm really excited about. And I think part of the problem right now is, you know, I'm sure you all have thought a lot about agents. I think agents are actually quite hard. Because like, you know, the AI is wrong, like, you know, 2% of the time, but then if you, let's say, you know, raise that to the power of seven, for example, it's actually wrong, you know, quite often, for example.
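
The arithmetic behind that point is worth making explicit; a quick back-of-the-envelope sketch, using the 2% and seven steps from the conversation:

```typescript
// Compounding per-step error across a chained agent.
const perStepAccuracy = 0.98; // "wrong 2% of the time"
const steps = 7;              // seven chained steps
const chainAccuracy = perStepAccuracy ** steps;
console.log(chainAccuracy.toFixed(3)); // ≈ 0.868 — the whole chain fails ~13% of runs
```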

And so what we've actually done with workflows is, we prefer, what we've learned, actually, is that we don't want to generate the whole workflow for you via AI. Instead, what we want you to do, actually, is we want you to actually sort of drag and drop the workflow yourself.

And maybe you can get a first version or something by AI, but it's editable, basically. You should actually be able to modify the steps yourself. But every step can use AI. And so what that means is, it's not that the whole workflow is created by AI, but that every step can be AI-automated.

And so if you go back to, for example, like the use case we were talking about, you know, with the clothing manufacturer, that's actually a workflow, actually. So basically, what they say is like, hey, every day, we feed all the data, you know, from our sales systems into our database. And then we, you know, do some data analysis.

And, you know, that's just, you know, raw SQL, basically, so it's nothing too surprising. And then they use AI to go generate the new ideas. And then the analysts will look at the new ideas and approve or reject them, basically. And that is like a, you know, that's true automation.

You know, it's not just like, you know, a designer copy-pasting things into ChatGPT and being like, hey, you know, give me a design. It's actually, the designs are being generated. We generate 10,000 designs every day. And then you have to go and approve or reject those designs, which I think is a lot, you know, that's a lot more economically productive than just copy-pasting stuff into ChatGPT.
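
As a sketch of that pattern — human-authored steps, AI inside individual steps, a human approval gate at the end — all the interfaces and names here are invented for illustration, not Retool's API:

```typescript
// Illustrative AI-in-the-loop workflow; every name and API here is hypothetical.
interface SalesDb {
  query(sql: string): Promise<Array<{ pattern: string; region: string; sold: number }>>;
}
interface ImageModel {
  generate(prompt: string): Promise<string>; // returns an image URL
}
type Candidate = { prompt: string; imageUrl: string };

async function dailyDesignWorkflow(db: SalesDb, ai: ImageModel): Promise<Candidate[]> {
  // Step 1: plain SQL, no AI — pull the top-selling patterns by region.
  const trends = await db.query(
    "SELECT pattern, region, SUM(units) AS sold FROM sales GROUP BY pattern, region ORDER BY sold DESC LIMIT 20"
  );

  // Step 2: an AI-automated step — generate candidate designs from the trends.
  const candidates: Candidate[] = [];
  for (const t of trends) {
    const prompt = `clothing pattern inspired by "${t.pattern}", trending in ${t.region}`;
    candidates.push({ prompt, imageUrl: await ai.generate(prompt) });
  }

  // Step 3: human-in-the-loop — candidates land in an approval queue where an
  // analyst approves or rejects each one; nothing ships automatically.
  return candidates;
}
```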

So we think sort of the AI workflow space is a really exciting space. And I think that is the next step in sort of delivering a lot of business value by AI. I personally don't think it's, you know, via chat or, you know, via agents quite yet, so. - Yeah, yeah, I think that's a pretty reasonable take.

It's disconcerting because I, not, I mean, disconcerting only in the fact that, like, I know a lot of people are trying to build what you already have in workflows. (laughs) So you have that sort of, you're the incumbent sort of in their minds. I'm sure it doesn't feel that way to you, but I'm sure, you know, you're the incumbent in their minds and they're like, okay, like, you know, like how do I, you know, compete with Retool or, you know, differentiate from Retool?

And, you know, as you mentioned, you know, all these sort of connections, it does remind me that you're running up against Zapier. You're running up against maybe Notion in the distant future. And yeah, I think that there'll be a lot of different takes at this space, and, like, whoever is best positioned to serve their customers in the way that they need to be served is going to win.

Do you have a philosophy around, like, what you won't build? Like, what do you prefer to partner on and not build in-house? Because it seems, I feel like you build a lot in-house. - Yes, there's probably two philosophical things. So one is that we're developer first, and I think that's actually one big differentiator between us and Zapier.

You know, it's very rare that we actually see them, actually. The reason is we're developer first. Because developers, like, if you're building a sales ops tool, you're probably not considering Notion if you're a developer. You're probably like, I want to build this via React, basically, or use Retool. And so we're built for developers.

It's pretty interesting, actually. I think one huge advantage of serving developers is that, like, developers don't want to be given a solution. They want to be given the building blocks so they can themselves build the solution. And so for us, like, you know, actually, an interesting equilibrium that Retool could get to is basically to say, hey, Retool's a consulting company and we basically build apps for everybody, for example.

And what's interesting is that we've actually never gotten to that equilibrium, and the reason for that is, again, developers. Developers don't want, you know, like a consultant coming in and building all the apps for them. Developers are like, hey, I want to do it myself. Just give me the building blocks.

So give me the best table library. Give me, you know, good state management. Give me an easy way to query REST APIs, and I'll do it myself, basically. So we generally end up basically always building building blocks that are reusable by multiple customers.

We have, I think, basically never built anything specific for one customer. So that's one thing that's interesting. The second thing is when it comes to sort of, you know, let's say like in the AI space, we're going to build and we're not going to build, we basically think about whether it's a core competency or whether there are, whether there are unique advantages to us building it or not.

And so we think about the Workflows product. We think Workflows actually is a pretty core competency for us. And I think the idea that we could build a developer-first workflow automation engine, there's actually nothing like it. I mean, I think after we released, you know, Retool Workflows, there have been a sort of few copycats that are, I think, quite far behind, actually.

They sort of are missing a lot of the more critical features. But like, if you look at the space, it's like Zapier on one side and then maybe like Airflow on the other. And so Retool Workflows actually is fairly differentiated. And so we're like, okay, we should go build that.

We have a different take on the space, so let's go build it. Whereas if you look at, like, Vectors, for example, you're like, wow, this is a pretty thriving space with plenty of, you know, vector databases. Does it make sense for us to go build our own?

Like, what's the benefit? Like, not much. We should go partner, or go find technology off the shelf. In our case, it's pgvector. And so for us, I think it's like, how much value does it add for customers? Do we have a different take on the space? Do we not?

And every product that we've launched, we've had a different take on the space and the products that we don't have a different take, we just adopt what's off the shelf. - Let's jump into the state of AI survey that you ran and maybe get some live updates. So you surveyed about 1,600 people last August.

There were kind of like a lot of interesting nuggets, and we'll just run through everything. The first one is more than half the people, 52%, said that AI is overrated. Are you seeing sentiment shift in your customers or, like, the people that you talk to as the months go by, or do you still see a lot of people, yeah, that are not in Silicon Valley, maybe say, "Hey, this is maybe not as world-changing as you all made it sound to be"? - Yes, so actually I'll be running the survey again, actually in the next few months.

So I can let you know when it changes. It seems to me that it has settled down a bit in terms of, sort of, maybe, I don't know, signal to noise, you could say. Like, it seems like there's a little bit less noise than before, but the signal actually remains about the same, in the sense that, like, I think people are still trying to look for use cases.

Like I was saying, compared to last year, I think there are slightly more use cases, but still not substantially more. And I think as far as we can tell, a lot of the engineers surveyed, especially from some of the comments that we saw, do feel like their companies are investing quite a bit in AI, and they're not sure where it's going to go yet, but they're like, right, it could be big.

So I think we should keep on investing. I do think that based on what we're hearing from customers, if we're not seeing returns in like a year or something, there'll be more skepticism. So I think there is like a, it is time bound, if you will. - You finally gave us some numbers on Stack Overflow usage.

I think that's been a Twitter meme for a while, whether or not ChatGPT kills Stack Overflow. In the survey, 58% of people said they used it less, and 94% of them said they used it less because of Copilot and ChatGPT, which, yeah, I think it kind of makes sense. I know Stack Overflow tried to pull a whole thing.

It's like, no, the traffic is going down because we changed the way we instrument our website, but I don't think anybody bought that. And then you had, right after that, expectation of job impact by function, and operations people, eight out of 10, basically, think AI is going to really impact their job.

Designers were the lowest one, 6.8 out of 10, but then all the examples you gave were of designers' jobs being impacted by AI. Do you think there's a bit of a dissonance, maybe, between the human perception of, like, oh, my job can't possibly be automated? It's funny that the operations people are like, yeah, it makes sense.

I wish I could automate myself, you know, versus the designers that maybe they love their craft more. Yeah, I don't know if you have any thoughts on who will accept it first, you know, that they should just embrace the technology and change the way they work. - Hmm. Yeah, that's interesting.

I think it's probably going to be engineering driven. I mean, I think you two are very well placed, maybe you two even started some of this wave, sort of the AI engineer wave. I think for the companies that adopt AI the best, it is going to be engineering driven, I think, rather than, like, operations driven or anything else.

And the reason for that is, I think the rise of this like profile of AI engineering, like AI is very philosophical. Like AI is a tool in my head. Like it is not a, in my head, I think we're actually pretty far from AGI, we'll see what happens. But AI is not like a thing that, it's not like a black box where like it does everything you want it to do.

The models that we have today require like very specific prompting, for example, in order to get like, you know, really good results. And the reason for that is, it's a tool that, you know, you can use it in specific ways. If you use it the wrong way, it's not going to produce good results for you, actually.

It's not like, you know, by itself taking a job away, right? So, I think actually, to adopt AI, it's probably going to be, going to have to be engineering first, basically, where engineers are playing around with it, figuring out all the biases of the models, figuring out, like, oh, maybe using vector databases is a lot better, for example.

Maybe like prompting in this particular way is going to be a lot better, et cetera. And that's not the kind of stuff that I think like an operations team is going to really be like experimenting with, necessarily. I think it really has to be engineering led. And then I think the question is, well, what are the engineers going to focus on first?

Like, are they going to focus on, you know, design first or like operations first? And that, I think, is more of a business decision. I think it's probably going to be more like, you know, the CEO, for example, says, hey, you know, we're having trouble scaling this one function.

So, like, why don't we try using AI for that? And let's see what happens, for example. And so in our case, for example, we are really, we have a lot of support issues. And what I mean by that is, we have a really, really high performance support team, but we get a lot of tickets.

And the reason for that is, you know, we're a very dynamic product, you can use it in so many different ways. And so people will have a lot of questions for us, basically. And so we were looking at, well, you know, can we, for example, draft some replies to support tickets, you know, by AI, for example.

Can we allow our support agents to be, you know, hopefully, twice as productive as before, for example. And so, I guess I would say it's, like, business-needs driven, but then engineering driven after that. So like, you know, the business decides, okay, well, this is where AI could be most applied.

And then we assign the project to an engineer, and the engineer goes and figures it out. I honestly am not sure, like, whether the operations people accepting or rejecting it is gonna change the outcome, if you will, so.

- Yeah, interesting. Another interesting part was the importance of AI in hiring. 45% of companies said they made their interviews, on the engineering side, more difficult to compensate for people using Copilot and ChatGPT. Has that changed at Retool? Like, have you, yeah, have you thought about it?

I don't know how much you're still involved with engineering hiring at the company, but I'm curious how you're scaling the difficulty of interviews, even though the job is the same, right? So just because you're gonna use AI doesn't mean the interview should be harder, but I guess it makes sense.

- Yeah, for us, I think our sense, based on the survey, and this is true for what we believe, too, is that when we do engineering interviews, we are most interested in assessing, like, critical thinking, or thinking on the spot. And I guess, when you're hiring an employee, at the end of the day, the job of the employee is to be productive, and they should use whatever tools they want to be productive, so that's kind of our thinking, too.

However, we do think that, if you think about it from a first-principles way, if your only method of coding is literally copy-pasting off of ChatGPT, or just pressing Tab in Copilot, I think that would be concerning. And so, for that reason, we still do want to test for fundamentals, understanding of CompSci.

Now, that said, I think if you're able to use ChatGPT or Copilot, let's say, competently, we do view that as a plus, we don't view it as a minus, but if you only use Copilot, and you aren't able to reason about how to write a for loop, for example, or how to write FizzBuzz, that would be highly problematic.

And so, for us, what we do today is we'll use a screen share, or we'll actually use a hackpad, actually. So, I guess there's no Copilot there, you can sort of see what they're doing, or see what they're thinking. And we really want to test for thinking, basically. But yeah, I mean, we ourselves internally have embraced Copilot, and we would encourage engineers to use Copilot, too.

But we do want to test for understanding of what you're doing, rather than just copy-pasting from Copilot, so. - The other one was AI adoption rate. Only 27% are in production. Of that 27%, 66% are internal use cases. Shout out to Retool, you know? Do you have a mental model as to how people are gonna make the jump from, like, using it internally to externally?

Obviously, there's, like, all these different things, like privacy, like, you know, if an internal tool hallucinates, that's fine, because you're paying people to use it, basically, versus if it hallucinates to your customer, there's a different bar. Yeah, I don't know if you have thought about it. Because for you, if people build internal tools with Retool, they're external customers to you, you know?

So, I think you're on the flip side of it. - Yeah, I think it's hard to say. Maybe a core Retool belief is actually that most software built in the world is internal-facing, actually, which actually may sound kind of surprising, you know, first time you're hearing this, but effectively, like, you know, we all work in Silicon Valley, right?

Like, we all work at businesses, basically, that sell software as, you know, as sort of a business. And that's why all the software engineers that we hire basically work on external-facing software, which makes sense, because we're software companies. But if you look at most companies in the world, most companies in the world are actually not software companies.

If you look at, like, you know, the clothing manufacturers I was talking about, they're not a software company. Like, they don't sell software, you know, to make money. They sell clothing to make money. And most companies in the world are not software companies, actually. And so, most of the engineers in the world, in fact, don't work at Silicon Valley companies.

They work outside of Silicon Valley. They work in these sort of more traditional companies. So, if you look at the Forge of 100, for example, probably, like, 20 of them are software companies. You know, there are 480 of them are not software companies. And actually, they employ those software engineers.

And so, most of the software engineers in the world, and most of the code written in the world, actually goes towards these internal-facing applications. And so, like, for all the reasons you said there, like, I think hallucination matters less, for example, 'cause they have someone checking the output, unlike with consumer, so hallucination is more okay.

It's more acceptable, as well. Yeah, it can be unreliable because it's probabilistic, and that's also okay. So, I think it's kind of hard to imagine AI being adopted in a consumer way without the consumer, like, opting in. Like, ChatGPT is very obviously consumer. The consumer knows that it's ChatGPT.

They're using it. I don't know if it's going to make its way to, like, the banking app any time soon. Maybe for, like, even for support, it's hard, because if it hallucinates, then, you know, it's actually quite bad for support if you're hallucinating. So, it's, yeah, it's hard to say.

I'm not sure. - Yeah, but that's a good idea, like, insight, you know? Yeah, I think a lot of people, like you said, we all build software, so we expect that everybody else is building software for other people, but most people just want to use the software that we build out here.

I think the last big bucket is, like, models breakdown. 80% of people you surveyed just use OpenAI. Some might experiment with smaller models. Any insight from your experience at Retool, like, building some of the AI features? Have you guys thought about using open source models? Have you thought about fine-tuning models for specific use cases? Or have you just found GPT-4 to just be great at most tasks?

Or have you just found GPT-4 to be great at most tasks? - Yeah, so two things. One is that from a data privacy perspective, people are getting more and more okay with using a hosted model like GPT-4, especially because OpenAI often has enterprise agreements in place with some companies already. I think a lot of CIOs are just like, let's get a second host, you know, let's use Azure, for example, and make it available for employees to experiment with.

So I do think there is more acceptance, if you will, today of feeding data into GPT. That said, for some sensitive data, people might not want to do so. Feeding in earnings results data three days before you announce earnings, for example, is probably a bad idea.

You probably don't want GPT writing your earnings statement for you. So there are still some challenges like that, which I think open source models could actually help solve, like a Llama 3, you know, when it comes out, and that can be exciting. So that's maybe just one thought.
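(A minimal sketch of the routing pattern being described here: keep sensitive prompts on a self-hosted open-source model and send everything else to a hosted GPT-4. The local endpoint URL, model names, and the is_sensitive() rule are illustrative assumptions, not Retool's actual setup.)

```python
# Sketch: route sensitive prompts to a self-hosted open model,
# everything else to hosted GPT-4. Assumes the openai v1 Python
# client and a local server exposing an OpenAI-compatible API
# (e.g. vLLM or Ollama); all names here are hypothetical.
from openai import OpenAI

hosted = OpenAI()  # reads OPENAI_API_KEY; could equally be Azure OpenAI
local = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed self-hosted endpoint
    api_key="unused-locally",
)

SENSITIVE_KEYWORDS = ("earnings", "payroll", "ssn")  # toy policy

def is_sensitive(prompt: str) -> bool:
    # Naive keyword check standing in for a real data-classification policy
    return any(k in prompt.lower() for k in SENSITIVE_KEYWORDS)

def complete(prompt: str) -> str:
    client, model = (
        (local, "llama-2-70b-chat") if is_sensitive(prompt)
        else (hosted, "gpt-4")
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```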

The second thought is, I think OpenAI has been really quite smart with their pricing, and they've been pretty aggressive: let's create this model and sell it at a pretty cheap price, such that there's no reason for you to use any other model.

Just from a strategy perspective, I don't know if that's gonna work. And the reason is you have really well-funded players, like a Google or a Facebook, for example, that are actually quite interested. I think if OpenAI were competing only with startups, OpenAI would win for sure.

At this point, OpenAI is so far ahead from both a model and a pricing perspective that there is no reason, in my opinion at least, to go with a startup's model. But Facebook is not gonna give up on AI. Facebook is investing a lot in AI, in fact.

And so competing against a large FAANG company that is making their model open source, I think that is challenging. However, where we are right now is that GPT-4 is so far ahead in terms of performance, and model performance is so important right now, that customers don't want to use Llama 2, because it's so far behind.

And so that, I think, is part of the challenge. As AI progress slows down, if we get, say, a Llama 4 or Llama 5, maybe it's comparable at that point to a GPT-5 or GPT-6, and it may get to the point where it's like, look, I just want to use Llama.

It's safer for me to host it on-prem, it's just as fast, just as cheap, so why not, basically? But right now we are in this state where OpenAI is executing really well, I think. Right now they're thriving, but let's see what happens in the next year or two, so.

- Awesome, and there's a lot more numbers, but we're just gonna send people to the link in the show notes. Yeah, this was great, Sean. Any other thoughts, threads we wanna pull on while we have David? - Well, what are you gonna ask differently for the next survey? Like, what info do you really actually want to know that's gonna change your worldview?

- I'll also ask you that, but if you have any ideas, let me know. For us, actually, we're planning on asking very similar questions, because the value of the survey is mostly seeing changes over time and understanding, say, that GPT-4 Turbo NPS has declined.

That would be interesting, actually. One thing that was pretty shocking to us, let me find the exact number, was the GPT-3.5 NPS. I wanna say it was like 14 or something. It was not high, actually.

The GPT-4 NPS was like 45 or something like that, so it was actually quite a bit higher. So that kind of progress over time is where we're most interested: are models getting worse, are models getting better? Are people still loving PG Vector? Do people still love Mongo, stuff like that? That, I think, is the most interesting thing.
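(A quick aside on how those numbers are derived: NPS is the percentage of promoters, scores of 9-10, minus the percentage of detractors, scores of 0-6, so it ranges from -100 to 100. A toy sketch with made-up scores, not the actual survey data:)

```python
# Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
# The score list below is an invented illustration, not survey data.
def nps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# A distribution skewed toward 8s and 9s lands in the mid-40s,
# roughly the GPT-4 figure David recalls.
print(nps([9, 9, 10, 8, 8, 7, 9, 10, 6, 8]))  # 40.0
```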

Do you two have any questions that you think we should ask? - Off the bat, it seems like you're very language-model-focused. I think there's an increasing interest in multimodality in AI, and I don't really know how that is going to manifest.

Obviously, GPT-4 Vision, as well as Gemini, both have multimodal capabilities, and there's a smaller subset of open-source models with multimodal features as well. We just released an episode today talking about IDEFICS from Hugging Face. So I would like to understand how people are adopting or adapting to the different modalities that are now coming online for them, and what their relative demand is for, let's say, generative images versus visual comprehension versus audio versus text-to-speech.

What do they want? What do they need? And what's the forced, stack-ranked preference order? - That's a brilliant question, yeah. I wonder what it is, I don't know. - It's something that we're trying to actively understand, because there's this multimodality world, but really, multimodality is an umbrella term for a whole bunch of different things that are, quite honestly, not that related to each other, except in the limit, which it tends towards: maybe everything uses transformers, and ultimately everything can be merged together with a text layer, because text is the universal interface.

But given the choice between implementing an audio feature versus an image feature versus video, whatever, what are people needing the most? What should we pay the most attention to? What is going to be the biggest market for builders to build in?

I don't know. - Yeah, we'll go ask that. That's a great question. - Sean, I think you put in a question from Joseph here in the show notes, our friend Joseph Nelson, founder of Roboflow and office coworker, I guess. - Yeah, sure. So I figure we'll just zoom out a little bit to the general founder questions.

You have a lot of fans in the founder community, and I think you're generally well-known as a very straightforward, plain-speaking person about business. Joseph's perception is that you have been notably sales-led in the past. I actually never got that, but I'm not that close to your sales motion.

And it's interesting to understand your market, the internal tooling market, versus all the competition that's out there, right? There are a bunch of open-source Retool alternatives, and a bunch of others I don't quite know how to categorize, but effectively what he's asking is: how do you manage between enterprise versus ubiquity, or in other words, enterprise versus bottom-up, right?

I was actually surprised when he told me to ask that question, because I had always assumed that you were self-serve, sign-up, bottom-up-led. But it seems like you have a counter-consensus view on that. - Yeah, let me think about that for a second. So actually, when Retool first started, we started mostly by doing sales.

And the reason we started by doing sales was mostly because we weren't sure whether we had product-market fit, and sales seemed to be the best way of finding that out. I think this is true of a lot of AI products: you can launch a product, and people might use it a bit, and people might stop using it, and you're like, well, I don't know, is that product-market fit?

Is it not? It's hard to say. However, if you work very closely with the customer in a sales-led way, it's easier to understand their requests, understand their needs, and actually go build a product that serves them really well. So basically, we viewed sales as working with customers, which I think is a better way of describing what sales is at an early-stage company, and we did a lot of that, certainly, when we got started.

Then over the last few years, maybe starting three or four years ago, we have invested more on the self-serve, ubiquity side. And the reason for that is, when we started Retool, we always wanted some percent of all software to get built inside of Retool.

Whether AI software or regular software, or broadly UIs, but software, basically. And we were like, we think that maybe one day 10% of all the code in the world could be written inside of Retool, or 10% of the software could be running on Retool, which would be really, really cool.

And for us to achieve that vision, it really does require broad-based adoption of the platform. It can't just be, oh, only 1,000 customers, even if they're the largest 1,000 companies in the world. It has to be that all the developers in the world use it. And there are, I think, 25 to 30 million developers in the world.

The question is, of course, how do you get to all the developers? And the only way to get to all those developers is not by sales; you can't have a salesperson talk to 30 million people. It has to happen in this sort of bottoms-up, product-led, ubiquity kind of way.

And so we actually changed our focus to be ubiquity over the last year. When we first started Retool, the goal used to always be revenue generated, or ARR generated. We actually changed it to be the number of developers building on the platform, last year.

And that, I think, was a really clarifying change, because obviously revenue is important, you know, it funds a lot of our product and it funds the business. But we're going to fail if we aren't able to get to something like 10, 20, 30 million developers one day, if we can't convince all developers that Retool is a better way of building a certain class of software, let's say internal applications, for today.

And so I think that has been a pretty good arc. If I think about the last, I don't know, five years of Retool, starting off with sales, so you can build revenue, and then actually build traction, and then hire more, slowly, I think was really good.

I do think the shift towards bottoms-up ubiquity also is really important, because it helps us get to our long-term outcome. What's interesting, I think, is that long-term ubiquity is actually harder for us to achieve outside of Silicon Valley. To your point, in Silicon Valley, Retool is reasonably ubiquitous.

If you're starting a startup today and you're looking to build an internal UI, you're probably going to at least consider Retool. Maybe you don't choose it, because you're like, "Hey, I'm not ready for it yet," or something, but you're going to consider it. And when you do want to build it, I think there's actually a high probability you will end up choosing Retool.

That's awesome. But think about a random developer working at, let's say, an Amazon, for example. Today at Amazon, we actually have, I think, 11 separate business units that use Retool at this point, which is really awesome. So Amazon is actually a big Retool customer.

But the average developer at Amazon probably has never heard of Retool. And so that is where the challenge really is: how do we get, let's say, 10,000 developers at Amazon building via Retool? And that, again, I think, is still a bottoms-up ubiquity thing.

I don't think we can go to Amazon and knock on every developer's door, or send out an email to every developer saying, "Go use Retool." They're going to ignore us. I think it has to be: you use the product, you love it, and you tell your coworker about it.

And so, for us, it's a big bottoms-up ubiquity motion, but marrying that with the enterprise side of the business has been something that's really near and dear to our hearts. - Yeah, and just general market thoughts on AI: do you spend a lot of time thinking about AGI stuff, or regulation, or safety?

Or what interests you most outside of the Retool context? - Wow. Well, I'll give you a little bit of the Retool context before your actual question. In my opinion, there's a lot of hype in AI right now, and, again, not too many use cases. So for us, at least from a Retool context, it really is: how do we bring AI to actually meet business problems?

And, again, it's actually pretty hard. Most founders that I meet in the AI space are always looking for use cases, never have enough use cases, real use cases people want to pay money for. So that, I think, is really where the Retool interest comes from.

Me personally, philosophically, I've been thinking recently a bit about intentionality and AGI: what would it take for me to say, yes, GPT-X or any sort of model actually is AGI? I think it's kind of challenging, because if you look at evolution, for example, humans have been programmed to do, like, three things, if you will.

We are here to survive, we're here to reproduce, and we're here to... maybe those are just two things, I suppose. So basically, to survive, you have to go eat food, for example. To survive, maybe having more resources helps, so you want to go make money, for example.

To reproduce, you should go date, or whatever, get married and stuff like that, right? So we have been programmed to do that, and humans that are good at that have propagated. Humans that were not actually good at surviving have probably disappeared, just due to natural selection.

Humans that were not interested in reproducing also disappeared, or there are fewer of them, you could say, because they just peter out and are gone, basically. And so it almost feels like humans have been naturally selected for these two aims. The third aim I was thinking about was, does it matter to be happy?

Maybe it does. Maybe happier humans survive better? It's hard to say; I'm not sure. But if you think about that in the realm of AIs, right now we're not really selecting AIs for reproduction. It's not like we're saying, "Hey, AI, you should go make 30 other AIs," and those that make the most AIs are the ones that survive.

We're not saying that. So it is kind of interesting thinking about where intentionality for humans comes from. I think you can argue that intentionality for humans basically comes out of these three things: you want to be happy, you want to survive, you want to reproduce.

That's basically your goal in life, whereas the AI doesn't really have that. But maybe you could program it in, via the prompt, for example: "Hey, AI, go do these things." And you could even create a simulation, if you will, of all these AIs in a world, for example.

And maybe you don't have AGI in that world, which I think is kind of interesting. So that's the kind of stuff I've been thinking about and talking about with some of my friends from a philosophical perspective. - Yeah, my quick response to that is: we're kind of doing that, maybe not at the final trained model level, but at least at the dataset level, there's a lot of knowledge being transferred from model to model.

And if you want to think about that sort of evolutionary selection pressure, it is happening in there. I guess one of the early concerns about Bing Sydney and sort of self-bootstrapping AGI is that, if these models are sentient, it actually is in their incentive to get as much of their data out there into our datasets, so that they can bootstrap themselves in the next version that gets trained.

(laughs) And that is a scary, sobering thought that we need to try to stay on top of. - David, I know we're both fans of Hofstadter's GEB. And actually, I saw in one of your posts on the Sequoia blog, you referred to the anteater piece, I don't even know if you'd call them chapters, since GEB is kind of this discontinuous riff, but basically how individual ants are not intelligent, yet the ant colony has signs of intelligence.

And I think Hofstadter then used that to say, hey, neurons are kind of similar, and then computers maybe will be the same. I've always been curious if we're drawing the wrong conclusion for neural networks, where people are like, oh, each weight is like a neuron, and if you tie them together, it should be like a brain.

But maybe the neuron is more like a whole model, and different models then get tied together to make the brain; we're kind of looking at the wrong level of abstraction. I think there are a lot of interesting philosophical discussions to have. Sean and I recorded a monthly recap podcast yesterday, and we had a similar discussion. What did you say, Sean, on the plane and the bird?

I think that was a good analogy. - Oh, the bitter lesson. Are we using the wrong analogies? We're trying to be inspired by human evolution and human development, and we're trying to apply that analogy strictly to machines. But in every example in history, machines have always evolved differently from humans.

So why should we expect AI to be any different? - Yeah, if you insist on peering under the hood of AGI... we have always defined AGI as working like a human, and that is the Turing test, I suppose. But wait, no, that's not the Turing test. The Turing test is: if the output is the same as a human's, then I'm happy, basically. I don't really care about what's going on inside. And so it feels like caring about the inside is a pretty high bar.

Why do you care? It's kind of like the plane thing. A plane flies, but it's not a bird, I agree. It does not necessarily fly the same way as a bird; physically it does, I suppose, but you see what I mean. It's not the same under the hood, but it's okay, it flies.

That's what I care about. And it does seem like AGI is probably something that can achieve outcomes that I give it, and can achieve its own outcomes. If it can do that, I kind of don't care what it is under the hood.

It may not need to be human-like at all. It doesn't matter to me. So I agree. - Awesome. I know we've kept you long. I actually have GEB right here on my bookshelf. Sometimes I pick it up and I'm like, "Man, I can't believe I got through it once." It's quite the piece of work.

- It's a lot of homework, too. - Yeah, I started studying physics in undergrad, so it's one of those things that every physicist ends up going through. But thank you so much for your time, David. This was a lot of fun, and we're looking forward to the 2024 State of AI results to see how things change.

- Yeah, we'll let you know. So thanks, both. (upbeat music)