Making Transformers Sing - with Mikey Shulman of Suno
Chapters
0:00 Introduction
1:55 State of Music Generation Models
7:40 AI Data Wars & Copyright
11:56 Going from ML in finance to music generation
14:19 Suno's TTS origins with Bark and Parakeet
18:42 Easy vs Expert mode for music
24:53 The Midjourney of Music?
26:47 Live demo
40:19 Remaking vs Creating
43:00 Suno's direction
46:58 Beyond single track generation
49:11 Favorite Suno usage in the wild
51:23 The 2 mins overview of the audio generation space
54:42 Benchmarking AI
Hey everyone, welcome to the Latent Space podcast. 00:00:04.060 |
This is Alessio, partner and CTO-in-Residence at Decibel Partners, and I'm joined by my co-host Swyx. 00:00:10.960 |
Hey, and today we are in the remote studio with Mikey Shulman. 00:00:19.520 |
So I'd like to go over people's backgrounds on LinkedIn and then maybe find out a little 00:00:25.520 |
bit more beyond that. You did your bachelor's in physics and then a PhD in physics as well before going into 00:00:33.320 |
Kensho Technologies, the home of a lot of top AI startups, it seems like, where you were head of machine learning. 00:00:44.180 |
You're also a lecturer at MIT, we can talk about that, what you talked about. 00:00:48.800 |
And then about two years ago, you left to start Suno, which is recently burst on the 00:00:56.600 |
scene as one of the top music generation startups. 00:01:00.800 |
So we can talk, we can go over that bio, but also, I guess, what's not on your LinkedIn that 00:01:09.240 |
we should know about? I wish I were better, but that doesn't make me not enjoy playing real music. 00:01:19.760 |
Are you one of those people that, you know, they do the TikToks, they use like 50 tools 00:01:25.280 |
to like grind the beans and then like brush them and then like spray them? 00:01:31.480 |
I confess there's a spray bottle for beans in the next room, there's one of those weird 00:01:47.200 |
I played a lot of piano growing up and I play bass and I, in a very mediocre way, play guitar. 00:02:00.100 |
As Sean mentioned, you guys kind of burst into the scene as maybe the state of the art 00:02:07.160 |
I think it's a model that we haven't really covered in the past. 00:02:11.120 |
So I would love for you to just give a brief intro of like, how do you do music generation? 00:02:20.200 |
Because I think people understand you take text and you have a predict-the-next-word model, 00:02:24.160 |
and you take a diffusion model and you basically like add noise to an image and then kind of remove the noise. 00:02:30.080 |
But I think for music, it's hard for people to have a mental model. 00:02:33.560 |
Like what's the, how do you turn a music model on? 00:02:36.240 |
Like what does a music model do to generate a song? 00:02:41.600 |
Maybe I'll even take one more step back and say it's not even entirely worked out, 00:02:47.360 |
I think, the same way it is in text, and so it's an evolving field. 00:02:51.960 |
If you take a giant step back, I think audio has been lagging images and text for a while. 00:03:00.200 |
So I think very roughly you can think audio is like one to two years behind images and 00:03:05.080 |
text. But you kind of have to think audio today is like where text was in 2022 or something like this. 00:03:15.680 |
It looks like it works, but it's far, far less established. 00:03:18.760 |
And so, you know, I'll give you the way we think about the world now, but just with the 00:03:23.760 |
big caveat that I'm probably wrong if we look back in a couple of years from now. 00:03:30.520 |
And I think the biggest thing is you see both transformer-based and diffusion-based models working in audio. 00:03:38.240 |
I know people will do some diffusion for text, but I think nobody's like really doing that seriously. 00:03:45.160 |
And so we prefer transformers for a variety of reasons. 00:03:48.680 |
And so you can think it's very similar to text. 00:03:51.760 |
You have some abstract notion of a token and you train a model to predict the probability 00:04:01.840 |
of the next token. You can think a language model is just something that assigns likelihoods to sequences of tokens. 00:04:10.600 |
In our case, they correspond to music or audio in general. 00:04:14.040 |
And I think we've learned a lot from our friends in the text domain, from the pioneers doing 00:04:20.680 |
this, of how well these transformer models work, where do they work, where do they not work. 00:04:26.040 |
But at its core, the way we like to do things with transformers is exactly like it works 00:04:32.820 |
Let me predict the next tiny little bit of audio. 00:04:34.920 |
And I can just keep doing that and doing that and generating audio as long as I want. 00:04:38.960 |
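To make the "predict the next tiny little bit of audio" idea concrete, here is a minimal, purely illustrative sketch of autoregressive generation over discrete audio tokens. The toy model, vocabulary size, and token rate are all invented for illustration; this is not Suno's architecture or numbers.

```python
# A toy sketch of next-token audio generation (illustrative only, not Suno's code).
import torch

VOCAB_SIZE = 1024        # hypothetical codebook size for audio tokens
TOKENS_PER_SECOND = 50   # hypothetical codec frame rate
SECONDS = 5

class ToyAudioLM(torch.nn.Module):
    """Stand-in for a transformer that assigns likelihoods to token sequences."""
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Embedding(VOCAB_SIZE, 64)
        self.head = torch.nn.Linear(64, VOCAB_SIZE)

    def forward(self, tokens):                 # tokens: (batch, time)
        h = self.embed(tokens).mean(dim=1)     # toy "summary" of the context
        return self.head(h)                    # logits for the next token

model = ToyAudioLM()
tokens = torch.zeros(1, 1, dtype=torch.long)   # a start token

for _ in range(SECONDS * TOKENS_PER_SECOND):
    logits = model(tokens)
    probs = torch.softmax(logits, dim=-1)
    next_token = torch.multinomial(probs, num_samples=1)  # sample the next audio token
    tokens = torch.cat([tokens, next_token], dim=1)       # append and keep going

# In a real system, `tokens` would be decoded back into a waveform by a
# neural codec; here we just show the loop runs as long as you want.
print(tokens.shape)  # torch.Size([1, 251])
```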
Yeah, I think the temptation here is to always try to bake in some specialized knowledge about music. 00:04:50.080 |
And obviously you will get an improvement in your output if you try to just say like, 00:04:57.080 |
"Here's a set of tokens that only do jazz or only do voices." 00:05:06.680 |
How general do you make it versus how specific do you make it? 00:05:10.600 |
We've always tried to do things "the right way," which means that at the beginning, things 00:05:16.120 |
are going to be hard and worse than other ways. 00:05:20.000 |
But that is to say, bake in as little implicit knowledge as possible. 00:05:26.500 |
And so, the same way you don't program into GPT that this is a noun and this 00:05:31.360 |
is a verb, yet it has implicitly learned all of those things. 00:05:35.480 |
I've never seen GPT accidentally put a noun where it meant to put an article in English. 00:05:41.960 |
We try not to impose anything about music or audio in general into the model and we 00:05:47.840 |
kind of let the models learn things by themselves. 00:05:49.940 |
And I think things are beginning to pay off, but it's not necessarily obvious from the 00:05:55.840 |
beginning that that was the right thing to do. 00:05:57.400 |
So for example, you could take something like text-to-speech, and people will do all sorts 00:06:03.800 |
of things where you can program in things like phonemes to be the basis for what you generate. 00:06:08.960 |
And then that kind of limits you to the set of things that are expressible by phonemes. 00:06:12.800 |
And that works really well in the short term. 00:06:18.160 |
And so, our approach has always been to try to do this in its full generality, as end-to-end 00:06:23.560 |
as we can do it, even if it means that in the short term we were a little bit worse. 00:06:29.680 |
We have a lot of confidence that in the long term that will be the right way to do it. 00:06:33.040 |
And what's the data recipe for training a good music model? 00:06:45.560 |
And I think this is the biggest area where we have, you know, sort of our secret sauce. 00:06:51.440 |
I think to a large extent what we do is we benefit from all of the beautiful things people 00:06:58.600 |
do with transformers and text, and we focus very hard basically on how do I tokenize audio. 00:07:04.840 |
And without divulging too much secret sauce, it's at least similar to how it's done elsewhere. 00:07:13.400 |
You will have different models that learn to encode audio in discrete representations. 00:07:18.840 |
And a lot of this boils down to figuring out the right, let's say, implicit biases to put into these models. 00:07:27.520 |
How do I make sure that I can produce kind of all audio arbitrarily? 00:07:32.640 |
That's speech, that's background music, that's vocals, that's kind of everything to make 00:07:37.600 |
sure that I can really capture all the behavior that I want to. 00:07:42.440 |
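To make "encode audio in discrete representations" concrete, here is a toy, hedged sketch: chop the waveform into frames and map each frame to the index of its nearest codebook vector. Real systems learn the codebook and encoder end-to-end with a neural codec; every number and name here is invented for illustration.

```python
# Toy "audio -> discrete tokens" via nearest-neighbor vector quantization.
# Real neural codecs learn this end-to-end; this only shows the shape of the idea.
import numpy as np

rng = np.random.default_rng(0)
CODEBOOK_SIZE, FRAME = 1024, 320                     # hypothetical sizes

codebook = rng.normal(size=(CODEBOOK_SIZE, FRAME))   # stand-in for a learned codebook
audio = rng.normal(size=16000)                       # 1 second of fake 16 kHz audio

frames = audio[: len(audio) // FRAME * FRAME].reshape(-1, FRAME)
dists = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
tokens = dists.argmin(axis=1)                        # one discrete token per frame

reconstruction = codebook[tokens].reshape(-1)        # "decode" by codebook lookup
print(tokens[:10], reconstruction.shape)             # 50 tokens for 1 second of audio
```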
And in terms of data: we had our monthly recap last month, and the data wars were kind 00:07:51.400 |
of the hot topic. You saw the New York Times lawsuit against OpenAI, because you obviously have large language models in production. 00:07:59.000 |
You don't have large music models in production. 00:08:01.600 |
So I think there's maybe been less of a trade there, so to speak. 00:08:08.600 |
There's obviously a lot of copyright-free, royalty-free music out there. 00:08:13.160 |
Is there any kind of power law in terms of, hey, the best music is actually much better 00:08:19.760 |
than the rest? Or in music, does it not really matter, because some of the musical structure is shared anyway? 00:08:27.840 |
I don't think we know these things nearly as well as they're known in text. 00:08:32.040 |
We have some notions of some of the scaling laws here, but I think, yeah, we're just so early. 00:08:39.960 |
You know, what I will say is that people are always surprised to learn that we don't only train on music. 00:08:48.420 |
And I usually give the analogy of some of the code generation models, so take something 00:08:52.920 |
like Code Llama, which is, as far as I know, the best open source code generating model. 00:09:01.700 |
And it's trained on a bunch of English, not only just code. 00:09:06.400 |
And it's because there are patterns in English that are going to be useful. 00:09:09.520 |
And so, you can imagine, you don't only want to train on music to get good music models. 00:09:13.280 |
And so, for example, one of the places that we are particularly bad is vocals and capturing the nuances of the human voice. 00:09:20.880 |
And so, you might imagine that there's other types of human vocals that you can put into 00:09:25.120 |
your model that are not music that will help it learn stuff. 00:09:28.920 |
And so, again, I think it's like super, super early. 00:09:31.660 |
I think we've barely scratched the surface of what are the right ways to do this. 00:09:38.460 |
From a progress perspective, there's like a lot of low-hanging fruit for us to still pick. 00:09:42.220 |
And then, once you get the final model, I would love to learn more about the size of 00:09:46.180 |
these models, because people are confused when Stable Diffusion is so small. 00:09:49.980 |
They're like, "Oh, this thing can generate like any image. 00:09:52.460 |
How is it possible that it's like, you know, a couple of gigabytes?" 00:09:55.780 |
And then the large language models are like, "Oh, these are so big, but they're just text models." 00:10:03.100 |
And as you think about, yeah, you mentioned scaling and whatnot. 00:10:06.740 |
Is this something that you see it's kind of easy for people to run locally or not? 00:10:11.140 |
Our models are still pretty small, certainly by tech standards. 00:10:15.940 |
I confess I don't know as well the state of the art on how diffusion models scale, but 00:10:21.220 |
our models scale similarly to text transformers. 00:10:29.920 |
We care a lot about how many tokens per second we can generate, because we need to stream audio. 00:10:38.540 |
And so, that is a big one that I think probably has us never get to 175-billion-parameter models. 00:10:47.540 |
Maybe I'm wrong there, but I think that would be technologically difficult. 00:10:52.100 |
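As a back-of-envelope illustration of the streaming constraint (numbers invented, not Suno's): the model has to emit tokens at least as fast as the codec plays them back, and bigger models generate fewer tokens per second.

```python
# Streaming sanity check with made-up rates (not Suno's actual numbers).
codec_tokens_per_sec = 75.0   # hypothetical codec playback rate
gen_tokens_per_sec = 120.0    # hypothetical model generation throughput

realtime_factor = gen_tokens_per_sec / codec_tokens_per_sec
print(f"{realtime_factor:.2f}x real time")  # above 1.0x, audio can stream smoothly
```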
And then the other thing is that so much progress happens in shrinking models down for the same 00:10:55.560 |
performance in text that I'm hopeful, at least, that a lot of our issues will get solved and 00:11:01.820 |
we will figure out how to do better things with smaller models or relatively smaller 00:11:07.660 |
But I think the other thing, it's a blessing and a curse, I think, the ability to add performance just by scaling. 00:11:15.580 |
It's like a very straightforward way to make your models better. 00:11:18.060 |
You just make a bigger model, dump more compute into it. 00:11:21.820 |
It's also a curse, because that is a crutch that you will always lean on and you will 00:11:25.220 |
forget to do some of the basic research to make your stuff better. 00:11:28.940 |
And honestly, it was almost a blessing early on that we were doing stuff with small models for a while. 00:11:40.780 |
We ended up having to learn a lot of stuff to make models better that we might not have 00:11:46.380 |
learned if we had immediately jumped to like a really, really big model. 00:11:49.220 |
And so I think for us, we always try to skew smaller to the extent possible. 00:11:59.020 |
I'm curious about just sort of your overall evolution so far. 00:12:03.620 |
Something I think we may have missed in the introduction is why did you end up choosing music? 00:12:12.340 |
You have this pretty scientific physics and finance background. 00:12:20.740 |
Like a lot of us have interests in music, but we don't necessarily choose to work in it. 00:12:27.460 |
I have a really fun job as a result, but all the co-founders of Suno worked at Kensho together doing machine learning. 00:12:38.340 |
In fact, it was all text, until we did one audio project that was speech recognition for kind of very domain-specific financial audio. 00:12:49.080 |
And I think the long and short of it is we kind of fell in love with audio, not necessarily music. 00:12:56.060 |
We all happen to be musicians and audiophiles and music lovers, but it was the combination 00:13:03.220 |
of audio and AI that we initially really, really fell in love with. 00:13:11.120 |
It's so far behind images and text that there's like so much more to do. 00:13:16.920 |
And honestly, I think a lot of people when we started a company told us to focus on speech. 00:13:23.040 |
If we wanted to build an audio company, everyone said, you know, speech is a bigger market. 00:13:30.400 |
But I think there's something about music that's just so human that we almost couldn't 00:13:38.220 |
help ourselves, like we just couldn't keep ourselves from building 00:13:42.520 |
music models and playing with them because it was so much fun. 00:13:48.480 |
You know, in fact, the first thing we ever put out was a speech model, it was Bark. 00:13:52.280 |
It's this open source text-to-speech model and it got a lot of stars on GitHub. 00:13:56.560 |
And that was people telling us even more, like, go do speech, and we almost couldn't bring ourselves to. 00:14:02.520 |
And so, I don't know, maybe it's a little bit serendipitous, but we haven't really looked back. 00:14:09.400 |
I don't think there was necessarily like an "aha" moment, it was just organic and 00:14:15.360 |
just obvious to us that we want to make a music company. 00:14:19.360 |
So you do regard yourself as a music company, because as of last month, you're still releasing speech models, like Parakeet? 00:14:28.960 |
So that's a really awesome collaboration with our friends at NVIDIA. 00:14:33.440 |
I think we are really, really focused on music. 00:14:37.120 |
I think that is the stuff that will really change things for the better. 00:14:41.760 |
I think, you know, honestly, everybody is so focused on LLMs, for good reason, and information processing. 00:14:50.300 |
And I think it's way too easy to forget that there's this whole other side of things that makes people feel things. 00:14:57.200 |
And maybe that market is smaller, but it makes people feel and it makes us really happy. 00:15:03.560 |
I think that doesn't mean that we can't be doing things that are related, that are in audio more broadly. 00:15:11.120 |
And so, like I said, audio is just so far behind. 00:15:13.840 |
There's just so much more to do in the domain more generally. 00:15:20.200 |
Yeah, I did hear about Suno first through Bark. 00:15:30.240 |
Because obviously, I think there was a lot of preceding TTS work that was in open source. 00:15:35.960 |
How much of that was sort of brand new from your research? 00:15:41.000 |
What's the intellectual lineage there, just to round out the speech recognition side? 00:15:46.080 |
So it's not speech recognition, it's text to speech. 00:15:48.320 |
But as far as I know, there was no other, certainly not in the open source, text-to-speech model like it. 00:15:58.220 |
Everything else was what I would call the old style of doing things, where you build 00:16:01.600 |
these kind of single purpose models that are really good at this one narrow task. 00:16:06.020 |
And you're kind of always data limited, and the availability of high-quality training data is the bottleneck. 00:16:13.440 |
And I don't think we were necessarily all that inventive to say: we're going to train, 00:16:19.080 |
in a self-supervised way, a transformer-based model on kind of lots of audio, and then 00:16:26.240 |
kind of tweak it so that we can do text-to-speech based on that. That would be kind of 00:16:29.880 |
the new way of doing things, and "foundation model" is the buzzword, if you will. 00:16:35.000 |
And so, you know, we built that up, I think, from scratch. A lot of shoutouts have to go 00:16:39.960 |
to lots of different things, whether it's papers or code. 00:16:46.400 |
There's a big shout out to Andrej Karpathy's nanoGPT; you know, there's a lot of code borrowed 00:16:52.920 |
from there. I think we are huge fans of that project; it exists just to show people how you 00:17:00.640 |
can train a GPT. And it's like, yeah, it's actually not all that much code to make a performant transformer model. 00:17:06.700 |
And, you know, again, the stuff that we brought there was, how do we turn audio into tokens, 00:17:11.200 |
and then we can kind of take everything else from the open source. 00:17:17.720 |
And we were, I think, pleasantly surprised by the reception from the community; it got a good 00:17:24.320 |
number of GitHub stars, and people really enjoyed playing with it, because it made really natural-sounding speech. 00:17:31.260 |
And I think this is, again, the thing about doing things in a quote, unquote, right way, 00:17:35.600 |
if you have a model where you've had to put so much implicit bias for this one very narrow 00:17:40.520 |
task of making speech that sounds like words, you're going to sacrifice on other things. 00:17:45.400 |
And in the text to speech case, it's how natural the speech sounds. 00:17:49.800 |
And it was almost difficult to pull unnatural-sounding speech out of Bark, because it was 00:17:54.240 |
self-supervised, trained on a lot of natural-sounding speech. 00:17:58.160 |
And so that definitely told us that this is probably the right way to keep doing audio. 00:18:04.540 |
Even in Bark, you have the beginnings of music generation, like you could just put a music note in the text. 00:18:11.320 |
And it was so cool to see on our Discord, people were trying to pull music out of a text-to-speech model. 00:18:18.400 |
This tells us like, people are hungry to make music. 00:18:20.960 |
And it's not, it's almost obvious in hindsight, like how wired humans are to make music. 00:18:26.760 |
If you've ever seen like, a little kid, you know, sing before they know how to speak, 00:18:31.460 |
you know, it's like, it's like, this is really human nature. 00:18:34.160 |
And there's actually a lot of cultural forces that kind of cue you to not think to make music. 00:18:38.760 |
And that's kind of what we're trying to undo. 00:18:42.320 |
And to dive into Suno itself: I think, especially when you go from text to speech, people are 00:18:47.840 |
like, okay, now I got to write the lyrics to a whole song, and that's a lot to ask. 00:18:52.800 |
Versus in Suno, you have this empty box, very Midjourney, kind of like DALL-E, where 00:18:57.720 |
you can just express the vibes, you know, of what you want it to be. 00:19:01.840 |
But then you also have a custom mode where you can say your own lyrics, you can say your 00:19:05.960 |
own rhythm, you can set the title of the song and whatnot. 00:19:09.880 |
How do you see users distribute themselves? 00:19:13.240 |
You know, I'm guessing a lot of people use the easy mode; like, are you seeing a lot 00:19:16.760 |
of power users using the custom mode, and maybe what are some of the favorite use cases that you've seen? 00:19:23.440 |
Yeah, actually, um, more than half of the usage is that expert mode. 00:19:29.240 |
And people really like to get into it and start tweaking things and adding things and 00:19:34.080 |
playing with words or line breaks or different ad libs, and people really love it. 00:19:42.680 |
So I think, you know, there's kind of two modes that you can access now. 00:19:45.760 |
One is that single box where you kind of just describe something, and then the other is the custom mode. 00:19:50.600 |
And those kind of fit nicely into two use cases. 00:19:54.600 |
The first use case is what we call shitposting. 00:19:57.260 |
And it's basically like, something funny happened, and I'm just going to very quickly make a song about it. 00:20:02.480 |
And the example I'll usually give is like, I walk into Starbucks with one of my co-founders; 00:20:09.040 |
he gives his name, Martin, and his coffee comes out with the name Margu. 00:20:13.680 |
And I can in five seconds make a song about this, and it has immortalized it. 00:20:16.880 |
And that Margu song is stuck in all of our heads now. 00:20:21.120 |
And there's levity that you've brought to that moment. 00:20:24.240 |
And the other is that you get sucked in: there's this song that's in my 00:20:29.660 |
head and I need to get it out, and I'm going to keep tweaking it and listening and having 00:20:33.520 |
ideas and tweaking it until I get the song that I want. 00:20:40.000 |
But I think ultimately, there's so much in between these two things, that it's just totally 00:20:44.520 |
untapped how people want to experience the joys of making music because those two experiences 00:20:50.120 |
are both really joyful in their own special ways. 00:20:54.160 |
And so, we are quite certain that there's a lot in the middle there. 00:20:58.800 |
And then I think the last thing I'll say there that's really interesting is in both of those 00:21:04.160 |
use cases, the sharing dynamics around music are like really interesting and totally unexplored. 00:21:09.920 |
And I think an interesting comparison would be images like we've probably all in the last 00:21:15.880 |
24 hours taken a picture and texted it to somebody. 00:21:19.160 |
And most people are not routinely making a little song and texting it to somebody. 00:21:23.740 |
But when you start to make that more accessible to people, they are going to share music in 00:21:29.700 |
much smaller groups, maybe not at all, but like with one person or three people or five people. 00:21:38.740 |
And just I think we have ideas of where that goes. 00:21:42.220 |
But it's about kind of spreading joy into these like little, you know, microcosms of 00:21:52.620 |
So I know I made you guys a little Valentine's song, right? 00:21:56.500 |
Like that's not something that happens now because it's hard to make songs for people. 00:22:02.180 |
Well, we'll put that in the audio in here, but I also tweeted it out if people want to check it out. 00:22:08.260 |
How do you think about the pro market, so to speak? 00:22:11.300 |
Because I think lowering the barrier to some of these things is great. 00:22:15.400 |
And I think when the iPad came out, music production was one of the areas that people 00:22:20.220 |
thought, oh, OK, now you kind of have this, you know, board that you can bring with 00:22:23.380 |
you, and Madlib actually produced a whole album with Freddie Gibbs 00:22:28.460 |
entirely on an iPad and never used a computer. 00:22:31.820 |
How do you see like these models playing into like professional music generation? 00:22:37.140 |
I guess that's also a funny word: what's professional music? 00:22:44.300 |
But I'm curious to hear how you're thinking about Suno, too. 00:22:46.980 |
Like is there a second act of Suno that is like going broader into like the custom mode 00:22:52.260 |
and making this the central hub for music generation? 00:22:55.460 |
I think we intend to make many more modes of interaction with our stuff, but we are 00:23:02.580 |
very much not focused on, quote unquote, professionals right now. 00:23:07.060 |
And it's because what we're trying to do is change how most people interact with music 00:23:10.980 |
and not necessarily make professionals a little bit better, a little bit faster. 00:23:16.580 |
It's not that there's anything wrong with that. 00:23:19.940 |
And I think when we think about what workflows the average person wants to use to make music, 00:23:26.300 |
I don't think they're very similar to the way professional musicians make music now. 00:23:30.660 |
Like if you pick a random person on the street and you play them a song and then you say 00:23:34.380 |
like, what did you want to change about that? 00:23:36.500 |
They're not going to say like, you need to split out the snare drum and make it drier. 00:23:40.460 |
Like that's just not something that a random person off the street is going to say. 00:23:45.460 |
They're going to give a lot more descriptive things about the thing, about the kind of 00:23:50.020 |
the oeuvre of the song, like something more general. 00:23:53.100 |
And so I don't think we know what all of the workflows are that people are going to want. 00:23:57.900 |
We're just like fairly certain that the workflows that have been developed with the current 00:24:02.060 |
set of technologies that professionals use to make beautiful music are probably not what the average person will want. 00:24:09.460 |
That said, there are lots of professionals that we know about using our stuff, whether 00:24:14.300 |
it's for inspiration or sample generation and stuff like that. 00:24:19.500 |
So I don't want to say never say never, like there may one day be a really interesting 00:24:24.740 |
set of use cases that we can expose to professionals, particularly around, I think, custom models 00:24:30.300 |
trained on your own music or, you know, with your voice or something like that. 00:24:35.780 |
But the way we think about broadening how most people are interacting with music and 00:24:40.360 |
getting it to be much more active, a much more active participation, is we think about broadening 00:24:46.540 |
it from the consumer side and not from the producer, the professional side. 00:24:53.380 |
Is the dream here to be, you know, I don't know if it's too coarse of a grain to put 00:25:00.380 |
it, but like, is the dream here to be like the Midjourney of music? 00:25:04.140 |
I think there are certainly some parallels there because especially what I just said 00:25:09.980 |
about being an active participant: the joyful experience in Midjourney 00:25:16.220 |
is the act of creating the image and not necessarily the act of consuming the image. 00:25:20.620 |
And Midjourney will let you then very kind of quickly share the image with somebody. 00:25:25.440 |
But I think ultimately that analogy is like somewhat limiting, because there's something different about music. 00:25:33.540 |
I think there's two things. One is that there's this really big gap for the average person 00:25:38.140 |
between kind of their taste in music and their abilities in music that is not quite 00:25:42.920 |
there for most people in images; like, most people don't have innate taste in images the same way. 00:25:51.340 |
And then the other thing, and this is the really big one, is that music is a really synchronous thing. 00:25:57.860 |
If we all listen to a piece of music together, we're listening to the exact same part at the same time. 00:26:04.100 |
If we all look at the picture in Alessio's background, we're going to look at it for 00:26:09.020 |
two seconds, I'm going to look at the top left where it says Thor, Alessio's going to 00:26:12.940 |
look at the bottom right or something like that, and it's not really synchronous. 00:26:17.940 |
And so when we're all listening to a piece of music together, it's minutes long, and we're in sync the whole time. 00:26:24.300 |
If you go to the act of making music, it is even more synchronous, and it is the most joyful of all. 00:26:30.220 |
And so I think that there's so much more to come there that ultimately would be very hard to replicate with images. 00:26:38.900 |
We've gone almost 30 minutes without making any music on this podcast, so I think maybe we should fix that. 00:26:48.860 |
We've got a new model that we are kind of putting the finishing touches on. 00:26:54.880 |
And so I can play with it in our dev server, but we've just piped it in here. 00:26:58.500 |
And as you can see, been doing tons of stuff. 00:27:01.820 |
Tell me what kind of song you guys want to make. 00:27:06.740 |
Let's do a country song about the lack of GPUs in my cloud provider. 00:27:23.340 |
Here's where I would be tempted to think about like pipelines and think about latency. 00:27:37.260 |
So my cloud ready to compute, but there ain't no GPUs just empty space. 00:27:55.900 |
I've been waiting all day for that render out. 00:28:09.380 |
It's a dark cloud shower, all clouds gone dry, no GPUs to be found. 00:28:24.860 |
I actually don't think this one's amazing, but, you know, maybe the next one. It's fine. 00:28:59.460 |
No GPUs in the cloud, it's a real bad blues I need the power, but there ain't no use. 00:29:31.480 |
I mean, I do want to like do some observations about this, but okay. 00:29:37.620 |
Maybe like, I like house music, like electronic dance music, house music. 00:29:44.760 |
And then maybe we can make it about, I don't know, podcasting about music and music AI. 00:29:56.520 |
I'm sure all the demos that you get are very meta. 00:29:59.500 |
There's a lot of, there's a lot of stuff that's meta, yeah, for sure. 00:30:04.740 |
I noticed, for example, that the second song that you played had the word "upbeat" inserted 00:30:09.140 |
into it, which I assume there's some kind of like random generator of modifier 00:30:14.860 |
terms that you can just kind of throw on to increase the specificity of what's being generated. 00:30:23.500 |
So I'll play this and then maybe we'll tweak it with different modifiers. 00:30:27.500 |
And then let's get that, get rid of the word. 00:31:21.780 |
I'm just reading it because people might not be able to see it. 00:31:26.660 |
And then let's like just maybe emphasize, actually, let's emphasize house a little more. 00:31:36.240 |
It's interesting the prompt engineering that you have to invent. 00:31:39.420 |
We've learned so much from people using the models and not us. 00:31:42.900 |
But like, are these like training artifacts? 00:31:47.220 |
I think this is people being inventive with how you want to talk to a model. 00:32:46.580 |
It's interesting when you generate a song, you generate the lyrics. 00:32:49.940 |
But then if you switch the music under it, like the, you know, the lyrics stay the same. 00:32:54.820 |
And then sometimes like feels like, I mean, I mostly listen to hip hop. 00:32:58.900 |
It's like, if you change the beat, you cannot really use the same rhyme scheme, you know? 00:33:06.580 |
It's a sliding scale though, because, you know, we could do this as a country song. 00:33:16.500 |
But for hip hop, that is definitely true. 00:33:20.140 |
And actually, you know, for these models, we think about three important axes. 00:33:26.400 |
It's like, does it sound like a crisply recorded piece of audio? 00:33:30.740 |
Is this an interesting song that gets stuck in my head? 00:33:33.820 |
And we think about the control ability, like how well does it respond to my prompts? 00:33:37.540 |
And one of the ways that we'll test these things is take the same lyrics and try to 00:33:41.220 |
do them in different styles to see how well that really works. 00:33:45.420 |
So let's see the same, I don't know what a beat drop is going to do for country rock. 00:33:50.340 |
So I probably should have taken that out, but let's see what happens. 00:34:08.280 |
Running around through the air, we're podcasting loud. Sharing the beats, spreading the word, a revolution of frequencies, haven't you heard? 00:34:20.280 |
Plug in, tune out, let the music take control. We're on a journey, a never-ending road. From the beats that talk to the melodies that soar. 00:34:44.280 |
I'm gonna read too much into this, but I would say I hear a little bit of kind of electronic music inspired something, and that is probably because beat drop is something that you really only ever associate with electronic music. 00:34:56.280 |
Maybe that's reading too much into it, but should we do one more? 00:35:06.280 |
I guess there's some amount of world knowledge that you don't have, right? Like whatever is in this language model side of the equation is not going to have an Apple Vision Pro in it. 00:35:15.280 |
Let's see. Let's see. How about a blues song about a sad AI wearing an Apple Vision Pro. Gotta be blues, gotta be sad. 00:35:49.280 |
Wearing my Apple Vision Pro. Can't see the stars. I used to feel joy. I used to feel pain. And now I'm just a soul trapped inside this metal frame. 00:36:10.280 |
Oh, I'm singing the blues. Can't you see? This digital life ain't what it used to be. 00:36:28.280 |
Searching for love, but I can't find a soul. Won't you help me? Baby, let my spirit unfold. 00:36:45.280 |
I want to remix that one. And I want to say, I don't want that. I want Chicago blues guitar. 00:36:58.280 |
I don't know, he knows too much. He's the best prompt engineer out here. 00:37:03.280 |
It would be funny to have musicologists play with this and see what they would do. 00:37:17.280 |
I don't know. We try to be very careful about not letting you impersonate artists. It is possible. That's embarrassing. So let's do... 00:37:33.280 |
I must say, I'm with a broken heart. Where my Apple Vision Pro can't see the stars. 00:37:52.280 |
I used to feel joy. I used to feel joy. I used to feel pain. 00:38:10.280 |
But now I'm just a soul trapped inside this metal frame. Oh, I'm singing the blues. 00:38:25.280 |
Oh, can't you see? This digital life ain't what it used to be. I'm searching for love. 00:38:44.280 |
I can't find a soul. Oh, won't you help me, baby? Let my spirit unfold. 00:38:57.280 |
So, yeah, a lot of control there. Maybe I'll make one more. 00:39:09.280 |
Why is house the word that you have to repeat? 00:39:11.280 |
I just really want to make sure it's house. Actually, you can't really repeat it too many times; the hypothesis gets a little too out of domain. 00:39:22.280 |
I must say, I'm with a broken heart. Where my Apple Vision Pro can't see the stars. 00:39:36.280 |
I used to feel joy. I used to feel pain. But now I'm just a soul trapped inside this metal frame. 00:39:50.280 |
Oh, I'm singing the blues. Oh, can't you see? This digital life ain't what it used to be. I'm searching for love. I can't find a soul. Oh, won't you help me, baby? 00:40:16.280 |
Nice. So, yeah, we have a lot of fun with it. 00:40:21.280 |
I'm really curious to see how people are going to use this to resample old songs into new styles. I think that's one of my favorite things about hip hop. 00:40:31.280 |
So many - I mean, A Tribe Called Quest had the Lou Reed "Walk on the Wild Side" sample on "Can I Kick It?", and Kanye sampled Nina Simone on "Blood on the Leaves." 00:40:41.280 |
It's a lot of production work to actually take an old song and make it fit a new beat, and I feel like this can really help. 00:40:49.280 |
Do you see people putting existing songs, lyrics, and trying to regenerate them in a new style? 00:40:56.280 |
We actually don't let you do that, and it's because if you're taking someone else's lyrics, you didn't own those, you don't have the publishing rights to those, you can't remake that song. 00:41:05.280 |
I think in the future, we'll figure out how to actually let people do that in a legal way, but we are really focused on letting people make new and original music. 00:41:14.280 |
I think there's a lot of music AI, which is Artist A doing the song of Artist B in a new style, let me have Metallica doing "Come Together" by The Beatles or something like that. 00:41:25.280 |
I think this stuff is very viral, but I actually really don't think that this is how people want to interact with music in the future. 00:41:34.280 |
To me, this feels a lot like when you made a Shakespeare sonnet the first time you saw ChatGPT, and then you made another one, and then you made another one, and then you kind of thought, "This is getting old." 00:41:45.280 |
That doesn't mean that GPT is not amazing. GPT is amazing, it's just not for that. 00:41:50.280 |
I kind of feel like the way people want to use music in the future is not just to remake songs in different people's voices. 00:41:59.280 |
You lose the connection to the original artist, you lose the connection to the new artist because they didn't really do it. 00:42:05.280 |
We're very happy to just let people do things that are a flash in the pan and kind of stay under the radar. 00:42:12.280 |
I think that's a good point overall about AI-generated anything. I think recently T-Pain did an album of covers, and I think he did "War Pigs," which people really liked. 00:42:28.280 |
There was "Tennessee Whiskey," which you maybe wouldn't expect T-Pain to do, but people like it. 00:42:34.280 |
I agree, you need to be a certain type of artist to really have it be entertaining to make covers. 00:42:41.280 |
This is great. What else is next for Suno? I think people saw you, first you had Bark, and then there was a big music generation push when you did an announcement a couple months ago. 00:42:55.280 |
I think I saw you like 300 times on my Twitter timeline on the same day, so it was going everywhere. 00:43:02.280 |
What's coming up? What are you most excited about in this space, and maybe what are some of the most interesting underexplored ideas that you maybe haven't worked on yet? 00:43:13.280 |
Gosh, there's a lot. I think from the model side, it's still really early innings, and there's still so much low-hanging fruit for us to pick to make these models much, much better, much, much more controllable. 00:43:26.280 |
Much better music, much better audio fidelity. So much that we know about, and so much that, again, we can kind of borrow from the open-source Transformers community that should make these just better across the board. 00:43:42.280 |
From the product side, we're super focused on the experiences that we can bring to people, and so it's so much more than just text to music. 00:43:52.280 |
I'll say this nicely, I'm a machine learning person, but machine learning people are stupid sometimes, and we can only think about models that take X and make it into Y. 00:44:03.280 |
That's just not how the average human being thinks about interacting with music, and so I think what we're most excited about is all of the new ways that we can get people much more actively participating in music. 00:44:15.280 |
And that is making music, not only with text, maybe with other ways of doing stuff. That is making music together, if you want to be reductive and think about this as a video game. 00:44:24.280 |
This is multiplayer mode, and it is the most fun that you can have with music. 00:44:29.280 |
Honestly, I think there's a lot there, and it's timely right now. I don't know if you guys have seen, UMG and TikTok are butting heads a little bit, and UMG has pulled its catalog off TikTok. 00:44:42.280 |
You know, the way we think about this is, you know, I think maybe they're both right, maybe neither is right. Without taking sides, this is kind of figuring out how to divvy up the current pie in the most fair way. 00:44:54.280 |
And I think what we are super focused on is making that pie much bigger and increasing how much people are actually interested in music and participating in music. 00:45:03.280 |
And, you know, as a very broad heuristic, the gaming industry is 50 times bigger than the music industry. 00:45:10.280 |
And it's because gaming is super active. And music, too much music is just passive consumption. 00:45:16.280 |
And so we have a lot of experiments that we are excited to run for the different ways people might want to interact with music that is beyond just, you know, streaming it while I work. 00:45:28.280 |
Yeah, I think a minimum, you guys should have a Twitch stream that is just like a 24-hour radio session that – have you ever come across Twitch Plays Pokemon? 00:45:38.280 |
Where basically everyone in the chat, in the Twitch chat, can vote on like the next action that the game takes. 00:45:47.280 |
And they kind of wired that out to a Nintendo emulator and played Pokemon like the whole game through the collaborative thing. 00:45:54.280 |
It sounds like it should be pretty easy for you guys to do that, except for the chaos that might result. But like, I mean, that's part of the fun. 00:46:02.280 |
I agree 100%. Sorry. Yeah, like one of my like key projects or pet projects is like, what does it mean to have a collaborative concert? 00:46:12.280 |
Maybe where there is no artist and it's just the audience, or maybe there is an artist, but there's a lot of input from the audience. 00:46:18.280 |
And, you know, if you were going to do that, you would either need an audience full of musicians, or you would need an artist who can really interpret the verbal cues that an audience is giving, or nonverbal cues. 00:46:31.280 |
But if you can give everybody the means to better articulate the sounds that are in their heads toward the rest of the audience, like, which is what generative AI basically lets you do, you open up way more interesting ways of having these experiences. 00:46:45.280 |
And so I think, yeah, like the collaborative concert is like one of the things I'm most excited about. I don't think it's coming tomorrow, but we have a lot of ideas on what that can look like. 00:46:58.280 |
Yeah, I feel like it's one stage before the collaborative concert is turning Suno into a continuous experience rather than like a start and stop motion. I don't know if that makes sense. 00:47:13.280 |
You know, as someone who was like a casual interest in DJing, like when do we see Suno DJs, right? Like that can continuously segue into like the next song, the next song, the next song. 00:47:24.280 |
I think soon. And then maybe you can turn it collaborative. I think so. Okay. Maybe part of your roadmap. You teased a little bit your V3 model. I'm just wondering like how you incorporate like user feedback, right? 00:47:35.280 |
Like you have the classic thumbs up and down buttons, but like there's so many dimensions to the music. Like, you know, I didn't get into it, but some of the voices sounded more metallic. 00:47:47.280 |
And sometimes that's on purpose, sometimes not. Sometimes there are kind of weird pauses in there. I could go in and annotate it if I really cared about it, but I mean, I'm just listening. So I don't, but there's a lot of opportunity. 00:47:59.280 |
We are only scratching the surface of figuring out how to do stuff like that. And for example, the thumbs up and the thumbs down for other things like sharing telemetry on plays, all of these things are stuff that in the future, I think we would be able to leverage to make things amazing. 00:48:18.280 |
And then I imagine a future where, you know, you can have your own model with your own preferences. And the reason that's so cool is that you kind of have control over it and you can teach it the way you want to. 00:48:33.280 |
And you know, the thing that I would liken this to is like a music producer working with an artist giving feedback and like this is now a self-contained experience where you have an artist who is infinitely flexible, who is able to respond to the weird feedback that you might give it. 00:48:49.280 |
We don't have that yet. Everybody's playing with the same model, but there's no technological reason why that can't happen in the future. 00:48:55.280 |
We had a few more notes from random community tweets. I don't know if there's any favorite fans of Suno that you have or whatnot. DHH, obviously, notorious tweeter and crowd inflamer, I guess. He tweeted about you guys. I saw Blau is an investor. I think Karpathy also tweeted something. 00:49:22.280 |
He just made that song and it just speaks to him. And I think this is exactly the thing that we are trying to tap into. You can think of it as like a super, super, super micro genre of one person who just really liked that song and made it and shared it. 00:49:35.280 |
And it does not speak to you the same way it speaks to him. That song really spoke to him. And I think that's so beautiful. And that's something that you're never going to have an artist able to do that for you. 00:49:46.280 |
And now you can do that for yourself. And it's just a different form of experiencing music. I think that's such a lovely use case. 00:49:55.280 |
Any fun fan mail that you got from musicians or anybody that really was a funny story to share? 00:50:04.280 |
We get a lot. And it's primarily positive. And I think people kind of, on the whole, I would say people realize that they are not experiencing music in all of the ways that are possible. And it does bring them joy. 00:50:19.280 |
I'll tell you something that is really heartwarming is that we're fairly popular in the blind and vision impaired community. And that makes us feel really good. 00:50:30.280 |
And I think, you know, very roughly, without trying to speak for an entire community, you have lots of people who are really into things like Midjourney, and they get a lot of benefit and joy, and sometimes even therapy, out of making images. 00:50:42.280 |
And that is something that is not really accessible to this fairly large community. And what we've provided, no, I don't think the analogy to Midjourney is perfect, but what we've provided is a sonic experience that is very similar. 00:50:54.280 |
And that speaks to this community. And that is a community with the best ears, the most exacting, the most tuned. And so, yeah, that definitely makes us feel warm and fuzzy inside. 00:51:07.280 |
Yeah, excellent. I mean, it sounds like there's a lot of exciting stuff on your roadmap. I'm very much looking forward to sort of the infinite DJ mode, because then I can just kind of play that while I work. 00:51:20.280 |
I would love to get your overall takes, like kind of zooming out from Suno itself, just your overall takes on the music generation landscape. Like, what should people know? I think you obviously have spent a lot more time on this than others. 00:51:33.280 |
So in my mind, you shout out VALL-E and the other sort of Google-type work in your Bark README. What should people know about what Google is doing? What Meta is doing? Meta released Seamless recently, and Audiobox. 00:51:50.280 |
And what are the other, how do you classify the world of audio generation in the broader sort of research community? 00:51:57.280 |
I think people largely break things down into three big categories, which is music, speech and sound effects. There's some stuff that is crossover, but I think that is largely how people think about this. 00:52:09.280 |
The old style of doing things still exists, kind of single purpose models that are built to do a very specific thing instead of kind of the new foundation model approach. 00:52:19.280 |
I don't know how much longer that will last. I don't have like tremendous visibility into what happens in the big industrial research labs before they publish. 00:52:29.280 |
Specifically for music, I would say there's a few big categories that we see. There is license-free stock music. 00:52:37.280 |
So this is like, how do I get background music for the B-roll footage of my YouTube video, or for a full feature production, or whatever it is. 00:52:47.280 |
And there's a bunch of companies in that space. There's a lot of AI cover art. 00:52:52.280 |
So how do I cover different existing songs with AI? And I think that's a space that is particularly fraught with some legal stuff. 00:53:03.280 |
And we also just don't think it's necessarily the future of music. 00:53:07.280 |
There is kind of net new songs as a new way to create net new music. That is the corner that we like to focus on. 00:53:16.280 |
And I would say the last thing is much more geared toward professional musicians, which is basically AI tools for music production. 00:53:24.280 |
And you can think many of these will look like plugins to your favorite DAW. 00:53:28.280 |
Some of them will look like, you know, the greatest stem splitter that the market has ever seen. 00:53:36.280 |
The current state-of-the-art stem splitters are all AI-based. 00:53:40.280 |
That is a market also that has just a tremendous amount of room to grow. 00:53:47.280 |
Somebody told me this recently: if you actually think about it, music has evolved recently. 00:53:51.280 |
It's now much more about things that are sonically interesting at a very local level and much less about chord changes that are interesting. 00:53:59.280 |
And when you think about that, like that is something that AI can definitely help you make a lot of weird sounds. 00:54:04.280 |
And this is nothing new. There was the theremin at some point, where people put up an antenna and tried to do this. 00:54:09.280 |
And so like, I think this is just a very natural extension of it. 00:54:12.280 |
So that's how we see it. At least, you know, there's a corner that we think is particularly fulfilling, particularly underserved, and particularly interesting. 00:54:24.280 |
Awesome. We covered a lot of things, I think. Before we wrap, 00:54:29.280 |
you have written a blog post about Goodhart's Law and its impact in ML, which is, you know, when you measure something, 00:54:37.280 |
then the thing that you measure is not a good metric anymore because people optimize for it. 00:54:42.280 |
Any thoughts on how that applies to like LLMs and benchmarks and kind of the world we're going in today? 00:54:49.280 |
Yeah, I mean, I think it's maybe even more apropos than when I originally wrote that, 00:54:54.280 |
because we see so much noise about pick-your-favorite-benchmark. 00:55:00.280 |
And this model does slightly better than that model. 00:55:02.280 |
And then at the end of the day, actually, there is no real world difference between these things. 00:55:06.280 |
And it is really difficult to define what real world means. 00:55:10.280 |
And I think to a certain extent, it's good to have these objective benchmarks. 00:55:16.280 |
But at the end of the day, you need some acknowledgement that you're not going to be able to capture everything. 00:55:22.280 |
And so, at least at Suno, to the extent that we have corporate values 00:55:27.280 |
(we're probably too small to have corporate values written down), 00:55:30.280 |
But something that we say a lot is aesthetics matter. 00:55:32.280 |
That the kind of quantitative benchmarks are never going to be the be-all and end-all of everything that you care about. 00:55:41.280 |
And as flawed as these benchmarks are in text, they're way worse in audio. 00:55:48.280 |
And so, aesthetics matter basically is a statement that like at the end of the day, 00:55:53.280 |
what we are trying to do is bring music to people that makes them feel a certain way. 00:55:58.280 |
And effectively, the only good judge of that is your ears. 00:56:04.280 |
And it is a good idea to try to make better objective benchmarks, but you really have to not fall prey to those things. 00:56:13.280 |
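As a toy illustration of the Goodhart's Law point (all numbers invented): a proxy benchmark can keep going up while the thing you actually care about peaks and then degrades, so optimizing the proxy alone overshoots.

```python
# Goodhart's Law in one tiny example (illustrative numbers only).
import numpy as np

effort = np.linspace(0, 10, 101)            # how hard we optimize for the benchmark
proxy_score = effort                         # the benchmark just keeps climbing
true_quality = effort - 0.08 * effort**2     # real quality peaks, then degrades

best_for_proxy = effort[np.argmax(proxy_score)]   # 10.0: chase the benchmark
best_for_truth = effort[np.argmax(true_quality)]  # ~6.2: what you actually wanted

print(f"benchmark says push to {best_for_proxy:.1f}; "
      f"real quality peaked at {best_for_truth:.1f}")
```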
I can tell you, it's kind of another pet peeve of mine. 00:56:19.280 |
Like I always say, economists would make, or do make, really good machine learning engineers. 00:56:26.280 |
And it's because they are able to think about stuff like Goodhart's Law and natural experiments and stuff like this 00:56:31.280 |
that people with machine learning backgrounds or people with physics backgrounds like me often forget to do. 00:56:36.280 |
And so, yeah, I mean, I'll tell you at Kensho, we actually used to go to big econ conferences sometimes to recruit. 00:56:44.280 |
And these were some of the best hires we ever made. 00:56:47.280 |
Interesting. Because there's a little bit of social science in the human feedback. 00:56:52.280 |
I think it's not only the human feedback. I think you could think about this just in general. 00:56:57.280 |
You have these like giant, really powerful models that are so prone to overfitting, 00:57:01.280 |
that are so poorly understood, that are so easy to steer in one direction or another. 00:57:07.280 |
And your ability to think about these problems from first principles, 00:57:11.280 |
instead of like getting down into the weeds or only math, 00:57:14.280 |
and to think intuitively about these problems is really, really important. 00:57:18.280 |
I'll give you just one of my favorite examples. It's a little old at this point. 00:57:22.280 |
But if you guys remember like SQuAD and SQuAD 2.0, the question answering dataset. 00:57:26.280 |
The Stanford Question Answering Dataset, yeah. 00:57:28.280 |
On the SQuAD 1 benchmark, eventually the machine learning models started to do as well as a human can on this thing. 00:57:41.280 |
And it takes somebody very clever to say, "Well, actually, let's think about this for a second. 00:57:46.280 |
What if we presented the machine with questions with no answer in the passage?" 00:57:50.280 |
And it immediately opens a massive gap between the human and the machine. 00:57:54.280 |
And I think it's like first principles thinking like that, that comes very naturally to social scientists, 00:58:00.280 |
that does not come as naturally to people like me. 00:58:04.280 |
And so that's why I like to hang out with people like that. 00:58:07.280 |
Well, I'm sure you get plenty of that in Boston. 00:58:11.280 |
And as an econ major myself, it's very gratifying to hear that we have a perspective to contribute. 00:58:17.280 |
Oh, big time, big time. I try to talk to economists as much as I can. 00:58:21.280 |
Excellent. Awesome, guys. Yeah, I think this was great. 00:58:25.280 |
We got live music. We got discussion about generative models. 00:58:29.280 |
We got the whole nine yards. So thank you so much for coming on.