How Does TikTok Actually Work? (It’s Scary…)

Chapters
0:00 Decoding TikTok’s Algorithm
44:41 Should I quit social media even if I don’t live on the slope of terribleness?
49:38 Is it bad to use TikTok to keep up with the news?
53:45 How does quantum computing relate to AI?
62:27 Trying to become a great composer
69:28 Tips for dealing with growing kids
78:27 Three Articles about TikTok (and Three Sighs of Exasperation)
The big news in the world of social media recently is the announcement made last week 00:00:05.860 |
that U.S. interests would be taking over operation of TikTok in America. 00:00:11.140 |
Now, this deal is complicated, so I'm going to read you here from a New York Times article that's explaining it. 00:00:18.500 |
The software giant Oracle will oversee the security of Americans' data 00:00:22.880 |
and monitor changes and updates to TikTok's powerful recommendation technology 00:00:27.060 |
under a new deal to avert a ban of the service, according to a senior White House official. 00:00:32.100 |
A copy of the algorithm, the recommendation engine that powers the app's addictive feed of short videos, 00:00:38.260 |
will be licensed from China to an American investor group that will oversee the app in the United States, 00:00:43.880 |
All right, so we're going to have this new entity, TikTok in America, 00:00:46.980 |
that will be created to run and monitor all of this, 00:00:49.560 |
and American investors will have an 80% share in it. 00:00:57.240 |
Well, TikTok's mysterious and acclaimed algorithm seems to know so much about its users 00:01:02.280 |
and is so good at serving up videos that meet their interests 00:01:05.260 |
that there's a general fear that a foreign government could be gaining too much influence over our population. 00:01:15.080 |
Is this something that can be fixed once the right people have control over it? 00:01:20.560 |
In other words, is our problem with social media today largely one of bad algorithms? 00:01:28.320 |
I'll put on my computer scientist hat and take a closer look at what's going on underneath the hood of services like TikTok, 00:01:34.580 |
and then I'll put on my digital ethicist hat to use what we learn to better understand what role these services should have 00:01:41.680 |
or should not have in both our lives and our civil culture. 00:01:44.240 |
As always, I'm Cal Newport, and this is Deep Questions. 00:02:00.380 |
Today's episode, Decoding TikTok's Algorithm. 00:02:05.600 |
As a computer science professor who specializes in algorithm theory, 00:02:09.840 |
I mean, this is what I studied in grad school is distributed algorithm theory. 00:02:13.040 |
It's what most of my academic CS papers are on. 00:02:15.260 |
I teach algorithms at both the undergraduate and graduate level. 00:02:18.800 |
So it's been sort of amusing and pleasing for me to see how often I'm hearing these days 00:02:25.180 |
non-computer science people use the word algorithm. 00:02:28.980 |
It's sort of like My Little Secret World is one that everyone has been exposed to. 00:02:33.440 |
But when it comes to discussions of social media platforms like TikTok, 00:02:38.260 |
this term algorithm has seemed to take on some sort of almost mystical power and capabilities. 00:02:46.660 |
I want to play a clip here from some of the recent discussions of this in the news from last week. 00:02:58.060 |
And they will basically be helping TikTok to retrain its algorithm and also to make sure 00:03:05.900 |
that U.S. data of people, all the Americans, you know, the roughly half the country using 00:03:12.760 |
And then I think we have another clip, right? 00:03:17.260 |
So there we see the way the algorithm, putting scare quotes around that, is being discussed. 00:03:36.180 |
And we want Americans to be in control of it, and we're going to retrain it once it's over here. 00:03:44.020 |
And with that sort of supervision, we can have some assurance that whatever it is that we are 00:03:50.440 |
uncomfortable about happening with a service like TikTok won't happen. 00:03:56.020 |
But what is this algorithm that is at the core of this new deal? 00:04:00.540 |
Well, I want to start with the mental model that I think most people have when we talk about these algorithms. 00:04:10.140 |
I think most people imagine it's basically like a digital version of a newspaper editor. 00:04:17.300 |
So we have like a newspaper editor who makes decisions. 00:04:24.360 |
We imagine an algorithm like we're building a computer version of that, a digital version 00:04:28.900 |
of that, one that can work at really high capacity and build a sort of custom newspaper for each of us. 00:04:35.040 |
So a computer program that makes decisions about what we see in the same way that a newspaper editor would. 00:04:44.980 |
So in that model, if that's our mental model for algorithms, transferring control of the 00:04:50.260 |
TikTok recommender algorithm to U.S. control makes sense, right? 00:04:53.120 |
We wouldn't want a foreign country playing the role of the editor for the social media newspapers 00:04:59.080 |
that half of the U.S. population is receiving through TikTok. 00:05:02.360 |
We want an editor who has our values, who has American values, who's not only not going to be 00:05:07.600 |
promoting non-American values, but also isn't conspiratorially trying to Manchurian Candidate- 00:05:12.060 |
style influence Americans to think one way or the other, right? 00:05:14.580 |
We would not be happy if the New York Times was edited in a dark room in Beijing. 00:05:18.840 |
And so we sort of feel, with this mental model, that we want our algorithm for this popular service to be under our control. 00:05:27.480 |
Well, I'm going to take out my computer scientist hat here, which as Jesse and I have discussed 00:05:37.520 |
And we're going to take a closer look at what really is going on. 00:05:41.740 |
And we're going to correct our mental model for thinking about these social media algorithms. 00:05:46.180 |
So what is Oracle going to discover when they finally get their hands on TikTok's algorithm 00:05:52.800 |
You know, like most private companies in the social media space, ByteDance has not published 00:05:58.500 |
detailed looks at exactly how their systems work. 00:06:04.120 |
But it's not like they've been completely secretive either. 00:06:07.160 |
We have a pretty good sense about more or less how TikTok works. 00:06:12.420 |
One, ByteDance researchers in the last five years, they published two different major papers 00:06:18.440 |
in academic venues that looked at major components of the system architecture they use for multiple of their products. 00:06:26.980 |
And two, we know the evolution of recommendation system algorithms; this 00:06:33.920 |
is something we just know from a data science, computer science perspective. 00:06:37.660 |
There's only so many ideas you can be pulling from. 00:06:41.520 |
No one suspects there's an entirely new idea that they're using. 00:06:47.200 |
I think we can create a pretty reasonable understanding of what's actually going on here. 00:06:54.180 |
First of all, when we say algorithm, that's not really the right word. 00:06:57.740 |
We should be talking about recommender architecture or recommender system architecture. 00:07:01.980 |
So what drives TikTok is a massive system that stores and updates information about hundreds 00:07:08.600 |
of billions of videos and, whatever it is, somewhere between one and two billion users. 00:07:15.640 |
And in the end, what this system has to do is for each individual user, as they do each 00:07:20.560 |
swipe, have a recommendation of what next video to show them. 00:07:23.380 |
So it's actually a very large global distributed system, not a single algorithm, that we need to think about. 00:07:29.160 |
Now, we know a lot about building these type of systems. 00:07:31.980 |
This idea of using computer-driven recommendation systems really began to get a lot of attention 00:07:37.800 |
in the late 90s and into the 2000s with Amazon and then Netflix really being pioneers and figuring 00:07:43.540 |
out how do we use data about stuff we want to show or sell to people, data about the people 00:07:48.740 |
themselves, put these two together, and make good recommendations. 00:07:52.380 |
So by the 2010s, we were getting pretty good at these. 00:07:54.960 |
What type of recommender architecture does something like TikTok probably use? 00:07:59.560 |
Almost certainly, they're using a general approach that's known as a two-tower system. 00:08:05.860 |
I want to set the stage here before we get to the two towers that they probably utilize. 00:08:12.340 |
All right, so imagine I am building a system to recommend short videos to you. 00:08:23.380 |
One way I might want to do this would be to come up with a master list of properties. 00:08:30.640 |
And I want this to be on the order of a few hundred, maybe a couple thousand properties, 00:08:36.920 |
kind of in that range, right, such that I can 00:08:42.320 |
go to any particular video and I can assess it on each of these properties, right? 00:08:46.740 |
So maybe you have a property that's like, it's funny or not, or there's a property that 00:08:51.140 |
is, you know, the content is conservative, or there's a, there's a property that's like, 00:08:54.620 |
if this content involves like South Korean K-pop music. 00:08:58.880 |
Now it could be more complicated than just singular properties you have or don't. 00:09:05.580 |
Like maybe one of my properties I assess videos on is, okay, it's like conservative K-pop content 00:09:12.740 |
Because maybe it turns out like that's a pretty important property. 00:09:16.300 |
The idea is I have a pretty limited list of properties that captures enough stuff about videos, like 00:09:22.920 |
the stuff that people really tend to care about, that I can start to make good recommendations. 00:09:26.680 |
And now I'm not going to necessarily just have like a bit for each, is it funny or not? 00:09:30.840 |
But maybe I have like a one to 10 or one to a hundred scale for, you know, how strongly it matches each property. 00:09:38.600 |
And I go through each of the videos in my system and I have my big list of properties and 00:09:44.600 |
Now assume when you join the service, I say, okay, you're a new user. 00:09:48.400 |
I'm going to sit down and have like a detailed user interview with you. 00:09:50.880 |
I'm going to talk to you about all sorts of stuff. 00:09:56.140 |
And what I'm going to do is take that same list of properties. 00:09:58.780 |
And now I'm going to go through and for each of those, try to measure, write down with a 00:10:04.600 |
number, how important each of those properties is to you. 00:10:07.880 |
In other words, when it comes to videos you like, do you want a lot of this property or not? 00:10:12.620 |
So like I find out, oh, you love funny stuff. 00:10:15.840 |
So, in that same category, I'll put down a pretty high number for, you know, funny. 00:10:21.220 |
So you're like, I do not want to see conservative content. 00:10:23.520 |
So I'm going to put a zero, a big zero there, right? 00:10:30.340 |
So now there's a little tag on you, like this tag we put on every video. 00:10:35.140 |
We say, okay, here's how much they care about each of these properties. 00:10:40.260 |
Don't ask me how, but just say I was able to do that. 00:10:43.360 |
This now gives me a pretty good way of recommending videos to you. 00:10:46.980 |
What I can do is say, look, here's what I want to go find: 00:10:51.160 |
videos that strongly overlap the properties you care about. 00:10:57.580 |
So where you have like a high score in a property, if the video also has a high score in that property, 00:11:01.960 |
I'm like, that looks good, but I also want to make sure that it doesn't have high scores 00:11:08.240 |
So the more it matches your preferences, so it has the things you like, and it doesn't 00:11:14.700 |
have the things you don't like, the better candidate I'm going to say that video is to 00:11:19.640 |
And so that's what I'm going to do to figure out a video to show to you. 00:11:23.740 |
If I could do that, the recommendations I make are going to seem like really good. 00:11:30.560 |
Like, I like baseball and, you know, left-wing politics, and I'm really 00:11:36.780 |
into Lord of the Rings, and my God, there was a video somewhere in these billions of 00:11:41.120 |
videos where you have Frodo making a sort of left-wing argument using a baseball metaphor. 00:11:49.040 |
You're like, wow, you know me really well. But what's really going on is this video matched 00:11:54.520 |
a bunch of things you cared about and didn't have a lot of the things that you don't. 00:11:59.660 |
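To make that matching step concrete, here is a minimal sketch in Python, with a made-up property list and made-up scores; TikTok's real property space is learned, not hand-written like this:

```python
# A minimal sketch of the matching idea described above, using made-up
# property scores. Both the video and the user are described with the
# same list of properties; a simple dot product rewards overlap on the
# properties the user cares about.
import numpy as np

properties = ["funny", "baseball", "left_wing_politics", "lord_of_the_rings", "k_pop"]

# Hypothetical user preference vector: how much this user cares about each property.
user = np.array([0.9, 0.8, 0.7, 0.9, 0.0])

# Hypothetical videos, scored on the same properties.
videos = {
    "frodo_baseball_take": np.array([0.6, 0.9, 0.8, 1.0, 0.0]),
    "k_pop_dance":         np.array([0.3, 0.0, 0.0, 0.0, 1.0]),
}

# Rank candidate videos by how strongly they overlap the user's interests.
scores = {name: float(np.dot(user, vec)) for name, vec in videos.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))
```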
A two-tower recommendation system is one way of building an automated system to do something like this. 00:12:06.220 |
Now, the way this works, I'm going to, God help us draw a picture, Jesse. 00:12:10.980 |
This is like an impossible thing to diagram, and I'm a, I'm bad at, uh, actually, I take that back. 00:12:17.920 |
Um, or I'm using the wrong pencil, but you can all tell I'm a fantastic technologist. 00:12:25.540 |
I couldn't even figure out how to use an Apple pencil. 00:12:29.520 |
So what I'm going to do here is draw a picture to make the two-tower idea concrete. 00:12:36.720 |
So on the screen here, I'm going to draw, on the right over here, a tower. 00:12:45.060 |
Inside of this one is going to be a collection of machine learning type of components. 00:12:53.320 |
Uh, they often think about these in layers: neural networks and transformers and embedding matrices. 00:12:59.100 |
And it's just mathematical manipulation type stuff that's complicated, but you don't need to worry about the details. 00:13:03.520 |
So the input to this tower is all of these videos. 00:13:08.760 |
So people have just uploaded all of these videos. 00:13:14.440 |
And then we're going to take these videos and, one by one, input them to this tower, 00:13:20.260 |
and what's going to come out on the other side of the tower is going to be a property list. 00:13:29.240 |
And I'm going to indicate that with, I don't know, I'll just put like a, a big purple box. 00:13:34.660 |
So that purple box is, uh, a vector of numbers, but just think of it as: we have 00:13:39.840 |
our thousand categories and it has a number for each of them. 00:13:42.040 |
So it's just describing the video using this sort of master list of things we care about. 00:13:51.180 |
And in the end, we'll have this, think of it as like a big database of billions of videos. 00:13:56.480 |
And we can do this like when they're uploaded, like the videos don't change. 00:13:59.180 |
And we have a way of describing each video: this big, long list of category numbers. 00:14:10.540 |
And it gives us this description of these videos in terms of the properties we care about. 00:14:15.420 |
So what's tower number two in this two-tower situation here? 00:14:19.180 |
And there's no way, by the way, Jesse, that this terminology did not come out of Lord of the Rings. 00:14:24.900 |
Like, do you even know that reference? The Two Towers is the name of a Lord of the Rings book. 00:14:30.020 |
Um, no, you spend too much time doing sports. 00:14:33.880 |
You gotta, I might've, but I would have lost it. 00:14:38.880 |
We have a second tower, and the way this tower works is now our input is going to be a user. 00:14:47.380 |
So it's like a description of everything we know about them, including primarily, and this 00:14:52.000 |
is the key thing, their behavior on the platform. 00:14:53.760 |
So things they've watched before, uh, things they haven't watched, how long they watch things. 00:14:58.940 |
So it's all this information about these people. 00:15:01.160 |
And we run each of the people through, they have their own tower, which again, inside 00:15:05.540 |
There's neural networks, there's transformers, there's embedding matrices. 00:15:11.580 |
And what we get on the other side is a description of each person in terms of 00:15:18.660 |
these same properties: 00:15:20.080 |
Hey, how much do they care about each of these things? 00:15:22.840 |
So we have a common way, one tower that does nothing but describe videos. 00:15:29.600 |
And one tower that does nothing but describe the interest of users. 00:15:35.360 |
And the key thing is, we describe both of these things the same way, with the same list of categories. 00:15:47.720 |
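As a rough picture of what those two towers might look like in code, here is a minimal sketch in PyTorch; the layer sizes, feature dimensions, and normalization are illustrative assumptions, since ByteDance has not published TikTok's exact model:

```python
# A minimal two-tower sketch (an assumption for illustration, not TikTok's
# actual model). Each tower maps raw features to an embedding of the same
# dimension, so users and videos can be compared directly.
import torch
import torch.nn as nn

class Tower(nn.Module):
    def __init__(self, input_dim: int, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize so similarity is a simple dot product / cosine.
        return nn.functional.normalize(self.net(x), dim=-1)

# One tower describes videos, the other describes users, in the same space.
video_tower = Tower(input_dim=512)   # e.g. features extracted from a video
user_tower = Tower(input_dim=300)    # e.g. features summarizing watch history

video_emb = video_tower(torch.randn(4, 512))   # 4 candidate videos
user_emb = user_tower(torch.randn(1, 300))     # 1 user

# Higher score = better predicted match between this user and each video.
scores = user_emb @ video_emb.T
print(scores)
```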
This is done in a largely like semi-supervised manner using machine learning techniques. 00:15:52.420 |
So it's not that humans sit down and decide what goes in each of these, you know, what each property should be. 00:16:02.920 |
Instead, what we do is we train both of these towers at the same time in the same ways we train 00:16:07.540 |
other sort of neural network-based systems like language models or vision models or other things. 00:16:12.320 |
And here, the data we have to train it is a lot of examples of, okay, we know this user liked this video. 00:16:20.760 |
You can use that data to train these things together. 00:16:23.480 |
And what is the goal that you're training them for? 00:16:26.140 |
You say: come up with these descriptions. I don't know how you're creating these categories and these numbers. 00:16:36.160 |
But what I want to see is the vectors you use, the descriptions of things that people 00:16:43.120 |
care about should be pretty close to the vectors you use to describe the stuff we know they 00:16:48.520 |
like and not too close to the things they don't like. 00:16:51.760 |
So I don't know what's in them, but I have a bunch of examples of stuff that real people 00:16:55.880 |
did and what videos they really like or don't like. 00:16:58.880 |
And I want you to keep nudging and changing your internal descriptions until you get pretty 00:17:03.840 |
good at describing people and describing things in such a way that the vector describing the 00:17:10.360 |
people is close to the things they like and not close to the things they don't. 00:17:15.180 |
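A hedged sketch of that training game, reusing the hypothetical Tower class from the block above plus a standard optimizer; this is the generic way such pairs of towers are trained together, not ByteDance's published loss:

```python
# Sketch of the training signal described above: nudge both towers so a
# user's embedding ends up close to videos they engaged with (label 1.0)
# and far from ones they skipped (label 0.0).
import torch
import torch.nn as nn

def training_step(user_tower, video_tower, user_feats, video_feats, labels, optimizer):
    user_emb = user_tower(user_feats)      # (batch, embed_dim)
    video_emb = video_tower(video_feats)   # (batch, embed_dim)

    # Predicted affinity for each (user, video) pair; a small temperature
    # sharpens the normalized dot products.
    logits = (user_emb * video_emb).sum(dim=-1) / 0.1

    # labels: 1.0 if the user watched/liked the video, 0.0 if they skipped it.
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)

    optimizer.zero_grad()
    loss.backward()            # gradients flow into BOTH towers at once
    optimizer.step()
    return loss.item()

# Assumed setup, reusing the Tower class sketched above:
# optimizer = torch.optim.Adam(
#     list(user_tower.parameters()) + list(video_tower.parameters()), lr=1e-3)
```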
We don't know, in these two-tower recommendation systems, what they're looking at, what these neural 00:17:21.040 |
networks and transformers, et cetera, are noticing in these videos or in these users. 00:17:30.240 |
All we know is that when we test it and say, okay, we know this user likes this video, it says, yeah, these vectors are close. 00:17:36.280 |
And we know this user doesn't like this video, and those vectors are far apart. 00:17:41.560 |
What they learn, in this diagram here, is the pink stuff. 00:17:45.380 |
They learn some useful way of describing users and describing videos so that it does a pretty good job. 00:17:55.700 |
So then if we step back, how does the final recommendation work? 00:18:00.680 |
So again, now we can draw from some of these architecture papers that ByteDance themselves 00:18:05.620 |
Well, what typically happens in these systems is you have so many of these items, so many 00:18:11.600 |
videos, that what you do is you say, okay, we're going to do 00:18:16.660 |
like a really rough first pass to get some candidates of what to show the user. 00:18:21.180 |
And we'll use something like a distance metric for this, a way of just measuring how close a video is to the user. 00:18:29.420 |
There's different ways of measuring this that you can do pretty quickly mathematically. 00:18:32.240 |
And we're just going to go through and grab a bunch of videos that are 00:18:36.280 |
close, by this sort of mathematical notion of close, to the user's description. 00:18:44.360 |
And then there's a little bit of proprietary stuff at the end, which is how do we then rank these 00:18:49.520 |
candidates and decide what's the actual one to return. 00:18:51.500 |
And that's actually a place where recommendation systems can have a little bit of human- 00:18:56.860 |
oriented heuristics and rules of thumb, in this final step. 00:19:05.560 |
That's where you can throw in some actually like hand coded, like final little rules or 00:19:10.180 |
tweaks or rules of thumb that would kind of happen at the end. 00:19:15.280 |
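Putting the two serving stages together, a generic version of that flow might look like the sketch below; the function names and the trending_boost knob are illustrative assumptions, not TikTok's actual code:

```python
# Sketch of the two-stage serving flow described above (a generic pattern):
# a fast nearest-neighbor pass over precomputed video embeddings to get rough
# candidates, then a final ranking step where hand-tuned rules can be layered on.
import numpy as np

def retrieve_candidates(user_vec: np.ndarray, video_vecs: np.ndarray, k: int = 100) -> np.ndarray:
    # Rough first pass: similarity between the user's vector and every video's
    # vector, keep the top k. (At TikTok scale this would be an approximate
    # nearest-neighbor index, not a brute-force scan.)
    sims = video_vecs @ user_vec
    return np.argsort(-sims)[:k]

def rank_candidates(candidate_ids, sims, already_seen, trending_boost):
    # Final step: this is where human-oriented heuristics can live.
    scored = []
    for vid in candidate_ids:
        if vid in already_seen:
            continue                             # e.g. don't repeat videos
        score = sims[vid] + trending_boost.get(vid, 0.0)  # e.g. nudge popular items up
        scored.append((score, vid))
    scored.sort(reverse=True)
    return [vid for _, vid in scored]

# Toy usage with random embeddings.
rng = np.random.default_rng(0)
videos = rng.normal(size=(1000, 128))
user = rng.normal(size=128)
cands = retrieve_candidates(user, videos)
ordered = rank_candidates(cands, videos @ user, already_seen={int(cands[0])}, trending_boost={})
print(ordered[:5])
```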
So why is TikTok so uniquely successful, if that's an architecture that other systems 00:19:20.520 |
use too? Spotify probably uses something like that. 00:19:23.100 |
Um, some other social platforms are using something like this. 00:19:29.060 |
Well, we kind of have an answer to that, um, as well. 00:19:33.120 |
There seems to be a few things going on here, right? 00:19:38.080 |
So short form video is a best case scenario for building one of these recommendation systems, 00:19:50.860 |
It doesn't have to deal with the complicating factors that other social services have, like 00:19:56.380 |
your friend graph and who you follow and trying to mix in the stuff you said you're interested in 00:20:02.080 |
with the stuff the algorithm thinks you're interested in. TikTok basically ignores all of that. 00:20:10.160 |
We're just showing you things that we think you'll like; nothing else matters. 00:20:15.040 |
Second, because they're short, you get a lot of feedback. An average TikTok user might 00:20:20.920 |
go through 30-plus videos in a typical session, each one generating feedback about what they like. 00:20:26.780 |
So you have a huge amount of data with which to get better and better at making these recommendations. 00:20:32.940 |
Compare that to Netflix: I might watch one series and one movie in a given week. 00:20:40.160 |
And I'm not going to try, by the way, if I'm on Netflix, most of the recommendations. 00:20:44.500 |
I'm probably going to end up watching something someone told me about anyway. 00:20:47.440 |
So I get a very slow stream of data if I'm a service like Netflix, but TikTok is optimal 00:20:53.020 |
because you only can look at what they show you. 00:20:55.160 |
So you're giving them feedback on every single recommendation they make. 00:20:59.200 |
The format is super well suited for these types of systems to work really well. 00:21:06.360 |
The other part of the advantage is actually architectural. 00:21:08.600 |
It's a really smart and powerful distributed system that ByteDance actually built for their 00:21:16.640 |
So one of the things that they do that's really impressive is they can update. 00:21:22.160 |
They update the training of the user tower almost in real time. 00:21:27.800 |
They don't just train these two towers once, then go deploy it. 00:21:32.280 |
As you're using the app, you're getting more data. 00:21:36.540 |
Which videos did you watch and how long did you watch them? 00:21:40.440 |
This is really pretty amazing from a distributed systems point of view: a system that can essentially 00:21:44.520 |
be constantly retraining your part, the user tower piece, using this new data. 00:21:54.660 |
And the way they do it is with this massive distributed system where the work is fragmented among many machines. 00:22:00.860 |
There's probably a system near you that's working on it. 00:22:02.880 |
In the US, most of this runs on Oracle Cloud Infrastructure, reportedly. 00:22:05.780 |
So there's probably some local machine doing it. 00:22:07.760 |
And then they transfer over the newly trained parameters to the production model that's actually making 00:22:14.900 |
And there's this whole fault tolerance system they have built up so that if this gets partitioned, it keeps working. 00:22:19.780 |
It's really hard computer engineering, but it allows them to continually update how it labels 00:22:25.880 |
you and what you care about almost immediately in reaction to the stuff you're doing. 00:22:31.260 |
This is what gives TikTok, for example, its amazing cold start capability, where if you're 00:22:35.980 |
a new TikTok user, you just start watching things and swiping and within 10 minutes, you're like, 00:22:41.700 |
how is this already showing me stuff that I really care about? 00:22:44.040 |
It's because they built this architecture that can retrain parts of the towers in real time 00:22:50.560 |
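As a toy illustration of that near-real-time behavior, here is a sketch that simply nudges a cached user vector after every swipe; the real system retrains user-tower parameters on fresh engagement data and ships them to the serving model, which is far more involved:

```python
# Simplified sketch of the near-real-time idea: every watch event immediately
# reshapes the user's profile vector, so the feed adapts within a session.
import numpy as np

def update_user_vector(user_vec: np.ndarray, video_vec: np.ndarray,
                       watch_fraction: float, lr: float = 0.1) -> np.ndarray:
    # Treat watching most of a video as positive feedback, quick skips as negative.
    signal = watch_fraction - 0.5              # roughly in [-0.5, 0.5]
    updated = user_vec + lr * signal * video_vec
    return updated / np.linalg.norm(updated)   # keep it on the unit sphere

# Toy session: each swipe produces feedback that immediately updates the profile.
rng = np.random.default_rng(1)
user = rng.normal(size=64); user /= np.linalg.norm(user)
for _ in range(30):                            # ~30 videos in a typical session
    video = rng.normal(size=64); video /= np.linalg.norm(video)
    watched = rng.uniform(0, 1)                # fraction of the video watched
    user = update_user_vector(user, video, watched)
print(user[:5])
```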
The other thing we know they've done in their system is that it's not a pure user-based, history-based recommendation. 00:22:59.520 |
They have a parallel system that's doing nothing but studying what's popular. 00:23:04.360 |
Hey, what's doing well on our network, maybe worldwide, or what's doing well in our network 00:23:09.460 |
in a particular region or among like a general group of users? 00:23:15.520 |
They call this the short-term profile versus the long-term profile, which is the user description. 00:23:22.880 |
So when they're trying to figure out what to show you, yeah, there's a bunch of candidates that closely match your profile. 00:23:29.500 |
But there's also candidates that maybe are a looser fit to your expressed interest, like 00:23:34.700 |
a reasonable fit, but a much looser fit, but are trending and are really popular right now. 00:23:41.060 |
And then so you can get shown, not everything you're being shown is just, here's the best 00:23:44.660 |
match to what you've shown interest in before. 00:23:46.240 |
It's also like, hey, this thing is really popular right now. 00:23:53.560 |
And then this becomes a feedback mechanism that allows you to see things that aren't like what you usually watch. 00:24:01.980 |
And it begins, you watch it for a while, and it allows the model when it's describing you 00:24:06.380 |
to sort of learn about other interests you may or may not have. 00:24:13.720 |
You get this mix of, oh, this is straight shot matching to one of my clear interests, 00:24:18.460 |
but also like, oh, this is weird and kind of compelling, but kind of off the wall. 00:24:22.120 |
And maybe half of those things you see, you end up watching them. 00:24:25.320 |
It's it putting in this sort of real-time popular stuff as well. 00:24:28.640 |
So they have sort of a secret sauce for mixing those two things together. 00:24:37.720 |
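One plausible way to sketch that mixing, with a made-up explore rate standing in for whatever proprietary blend they actually use:

```python
# Sketch of mixing "long-term profile" matches with a "short-term" trending pool.
import random

def build_feed(personalized: list, trending: list, n: int = 10,
               explore_rate: float = 0.3, seed: int = 0) -> list:
    # With probability explore_rate, slot in something popular right now
    # rather than the next best match to the user's known interests.
    rng = random.Random(seed)
    feed, p_iter, t_iter = [], iter(personalized), iter(trending)
    while len(feed) < n:
        source = t_iter if rng.random() < explore_rate else p_iter
        try:
            feed.append(next(source))
        except StopIteration:       # one pool ran dry; stop the toy example
            break
    return feed

print(build_feed([f"match_{i}" for i in range(20)],
                 [f"trending_{i}" for i in range(20)]))
```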
It's a system that was built, at the highest level, on a format, short videos recommended one 00:24:43.220 |
by one by an algorithm, that is perfect for this type of recommendation system. 00:24:47.840 |
You put those two things together, and the whole thing seems pretty eerie. 00:24:54.080 |
It knows more about me than I thought I knew about myself. 00:25:02.780 |
This is a well-implemented distributed system. 00:25:07.540 |
There is no magic newspaper-editor-style description in there somewhere that you can tweak. 00:25:11.680 |
Just a really well-built distributed system that runs these machine learning-based categorization models. 00:25:20.720 |
Well, modern recommendation architectures, like the one run by TikTok, are not digital newspaper 00:25:27.320 |
They're not things that we can easily configure to reflect particular values or interest or philosophies. 00:25:33.560 |
The machine learning techniques used in these two-tower architectures are completely agnostic to what the content actually is. 00:25:43.920 |
It could be, you know, descriptions of shopping behavior or movie watching behavior. 00:25:50.360 |
They've just been optimized in a relentless training loop for: I am assigning lists of numbers to these things. 00:25:58.940 |
And if these numbers are close to the numbers for this thing over here, the user, and the 00:26:05.220 |
system tells me that's good, then I think my numbers are good. 00:26:07.680 |
And if the system tells me they're bad, then I adjust how I do it until the system tells me 00:26:13.700 |
There is no visibility into how things are being described, or what matters to it. 00:26:18.320 |
It's just trying to win this training game of: as the two towers, I don't know in advance what users will like, 00:26:25.120 |
so I better have described them in a way that ends up matching the things that they liked. 00:26:28.300 |
The way these systems actually work in terms of if we want to think about what are they 00:26:33.180 |
actually doing, if you talk to a machine learning or data scientist, they'll say, yeah, what these 00:26:37.680 |
techniques do, you give them enough data, what they're trying to do mathematically is build 00:26:43.600 |
approximations, mathematical approximations of whatever underlying process or systems best 00:26:49.080 |
describe the patterns it's fed in its training data. 00:26:51.260 |
This is why if you feed a bunch of information about, you know, traffic times or something into 00:26:58.400 |
one of these models, and it gets good at predicting what's going to happen at given times, it has 00:27:03.880 |
approximated maybe some sort of reality about the underlying traffic system. 00:27:08.360 |
There are a lot more cars between 4:30 and 6:30 because that's when people get out of work. 00:27:13.300 |
It sort of learns these underlying, approximates these underlying systems and processes so it 00:27:17.240 |
can do better at predicting what's going to happen. 00:27:18.840 |
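Here is a toy version of that traffic example, with synthetic data, just to show what "approximating the underlying process" means in practice:

```python
# Toy version of the traffic example: the model is never told that rush hour
# exists, but fitting it to hour-of-day data forces it to approximate that
# underlying pattern well enough to predict.
import numpy as np

hours = np.arange(24)
# Synthetic "observed" car counts: a baseline plus a bump around 4:30-6:30 pm.
observed = 100 + 400 * np.exp(-((hours - 17.5) ** 2) / 2.0)

# Fit a simple curve to the observations (hours rescaled for numerical stability).
x = (hours - 12) / 12.0
model = np.poly1d(np.polyfit(x, observed, deg=6))

# The fitted model "knows" about rush hour only through the pattern it absorbed.
print("predicted at 5:30 pm:", round(float(model((17.5 - 12) / 12.0))))
print("predicted at 3:00 am:", round(float(model((3.0 - 12) / 12.0))))
```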
So when it comes to serving content like videos that feature other people to other people, this method 00:27:27.700 |
of curation, I believe, is something that should give us some pause. 00:27:32.200 |
And the reason is, if we think about this historically, humans have always been a little 00:27:36.360 |
uneasy and a little wary about the production of mass content. 00:27:40.000 |
We worry about it because we know content has a real impact. 00:27:44.920 |
People talking to other people through mass content, since the beginning of the printing press, has had real consequences. 00:27:53.000 |
Just look at the witch trials that happened all throughout Europe and eventually made their way to America. 00:27:58.240 |
A lot of this came out of, you know, printed material that got people thinking about this or that. 00:28:04.180 |
And the reason is, the human psyche has dark elements. 00:28:08.360 |
We have a hardwired affinity for hatred or violence or dehumanization, an attraction to the prurient. 00:28:17.200 |
You know, we have a lot of dark parts in our brain and we try to appeal to our better angels, 00:28:23.580 |
So we have all sorts of guardrails we put up. 00:28:26.400 |
If I'm editing a newspaper, if I'm a producer for a television program, if I'm, you know, 00:28:31.240 |
producing a podcast, I think carefully about what I'm going to say or not going to say to accomplish my goals. 00:28:38.700 |
We're careful about it because we know there's a lot of stuff in the human brain that's a little more dark. 00:28:47.180 |
We integrate human values into how we curate content. 00:28:51.220 |
And when we don't do that, we get really upset or worried about it. 00:28:55.320 |
I mean, this was like World War II propaganda, 00:28:57.840 |
which was basically a group saying, throw those guardrails aside. 00:29:04.380 |
And when we look back at World War II-era propaganda, we're like, ah, this is not great. 00:29:08.020 |
We're not like, hey, what great communication! 00:29:10.740 |
We don't go there even if it could help our cause. 00:29:16.540 |
These types of recommendation architectures don't share our values, because they don't know what values are. 00:29:21.900 |
They're just producing numbers to win a game of getting positive or negative zaps from a machine 00:29:28.020 |
So when we ask a system like this, hey, do a good job of recommending stuff, it's like, great, 00:29:33.580 |
I will do what I'm mathematically supposed to do, which is build these mathematical approximations 00:29:37.580 |
of the underlying systems and processes that help explain the patterns I've seen. 00:29:40.680 |
That means it's going to be building models of the dark impulses. 00:29:45.320 |
It's going to build models about like what the affinities for hatred or dehumanization or 00:29:50.940 |
violence or prurience or whatever it is that we're sort of embarrassed to admit we respond to. 00:30:02.740 |
You get basically not a digital newspaper editor, but a digital propagandist of the worst kind. 00:30:09.180 |
That's what happens when you allow blind mathematical models to start doing content curation. 00:30:15.260 |
And I think that territory is what we should worry about. 00:30:21.200 |
We're not replacing one country's values with another. 00:30:24.140 |
There's no knobs to turn about different properties we want or don't want. 00:30:27.300 |
Machine learning-based recommendation algorithms, architectures, again, just to summarize: they 00:30:32.540 |
are going to model the systems at hand blindly, to win the game of prediction. 00:30:40.180 |
And when it comes to content, that really pushes back against the last 500 years of human experience 00:30:44.220 |
with how we should deal with content production. 00:31:03.340 |
Am I glad that TikTok in America is coming under American control? 00:31:13.720 |
And there's places where a foreign government maybe could mess with these architectures to 00:31:17.220 |
screw with us, especially at that last phase where you do some heuristic tweaking on the final ranking. 00:31:24.460 |
But will this somehow allow us to fix TikTok in like a fundamental way? 00:31:28.300 |
If we have the right people controlling the algorithm, can we make these platforms behave 00:31:36.440 |
Machine learning algorithms deployed in this context will relentlessly learn how best to 00:31:41.220 |
summarize the human condition and exploit us to get us to do whatever it is they're optimizing for. 00:31:47.120 |
This technique is going to exploit our dark sides just as much as our bright, because it doesn't know the difference. 00:31:54.180 |
When it comes to technology in recent decades, I think we have underestimated the degree to 00:31:59.340 |
which we just sort of implicitly integrate our human values and how we operate in many 00:32:06.920 |
And as we start conceding more control of these things to technology, technology that cannot by 00:32:12.200 |
definition share those values, we begin to learn how unsettling things get. 00:32:17.000 |
We don't realize how much we depended on these just humanistic moral rules of thumb, 00:32:22.980 |
these normative standards about what's good and bad. 00:32:25.660 |
And so it's not so innocent to say, let's let an algorithm serve our news. 00:32:30.140 |
Let's let an algorithm serve our entertainment. 00:32:32.380 |
Let's let an algorithm be at the center of the town squares. 00:32:36.180 |
An algorithm is very different than a person and we don't miss what we have in human types 00:32:42.000 |
of moralistic thinking until we take it out of our system. 00:32:45.160 |
So that's my main takeaway about the TikTok situation. 00:32:47.700 |
I'm sure there's lots of national security concerns and this and that privacy concerns. 00:32:52.460 |
But we're far away from solving the problem of social media's dark impulses. 00:32:56.520 |
It is baked into the mathematics of how these things execute. 00:33:00.160 |
It is not an accident or a bad feature someone added late in the process that we can remove. 00:33:07.460 |
So I want to keep this conversation going. 00:33:11.440 |
We have some questions from listeners that are about this topic. 00:33:14.560 |
And then I have a few recent articles about TikTok. 00:33:17.020 |
So we can kind of see like, well, what, what really is happening with this technology now? 00:33:24.620 |
So stay tuned, but first we need to take a quick break to hear from one of our sponsors. 00:33:33.000 |
You know, I have a theory about financial planning that I want to share here. 00:33:38.220 |
When it comes to financial things like your spending, but also like your 401ks, your stock investments, 00:33:43.820 |
maybe properties you own, you know, the places where you have money or you're spending money, 00:33:49.640 |
The number one issue that matters is often visibility. 00:33:55.320 |
What I mean by that is if you can easily get updated on how all of your financial assets 00:33:59.660 |
and investments are doing, you're going to be motivated to keep going. 00:34:03.240 |
Like you're going to see, oh, I want this number over here to grow. 00:34:08.360 |
Let's keep investing, you know, we're on the right track. 00:34:12.320 |
Or if you see you're clearly spending a lot of money here on something that you don't really 00:34:16.240 |
want, you can be like, oh, let me adjust that behavior right away, because I'm getting that visibility. 00:34:22.920 |
It also allows you to do things like, oh, I need to change my asset allocations for the 00:34:26.620 |
Like all this stuff that really helps you get the most out of your money. 00:34:29.140 |
The more visibility you have into what's going on, the more you act. 00:34:32.560 |
And the more you act, the better you're going to be. 00:34:36.180 |
Like the big mistake, especially for people my age who have so much going on in their lives: 00:34:40.520 |
what's the money you're leaving on the table? 00:34:42.780 |
It's the money you're leaving on the table because you've been too busy to invest it, 00:34:46.280 |
to get it out of your bank account and into investments, to shift around the allocations, to cut 00:34:50.220 |
out this extra spending that you don't really need to do, but you didn't know what was happening. 00:34:53.180 |
And thousands of dollars are going out the door that could have otherwise been put to work. 00:34:58.980 |
This is why I've never understood when people are wary of like spending a little bit of money 00:35:03.820 |
on a service or advisor that can bring more visibility to their finances. 00:35:06.760 |
That really misses the forest for the trees, because having more visibility into what's going 00:35:11.820 |
on might cost you a little bit in the short term, but return massive, massive gains. 00:35:18.160 |
So this brings us to today's sponsor, Monarch Money, an all-in-one personal finance tool 00:35:22.740 |
that brings your entire financial life together in one clean interface on your laptop or your phone. 00:35:28.080 |
And I want to say right off the bat, just for our listeners, Monarch is offering 50% off your first year. 00:35:34.140 |
So stay tuned for the promo code that will get you that 50% off your first year. 00:35:39.160 |
It optimizes exactly the visibility issue I just discussed. 00:35:42.340 |
It's a beautiful interface that shows you everything, how everything is doing, how you're 00:35:46.300 |
spending, what's going on, how your different asset classes have been returning. 00:35:53.440 |
Now, whether or not you also work with an advisor or manage your money yourself, this is 00:35:57.380 |
a tool that will help you keep taking the actions you need to take. 00:36:02.480 |
For me, for example, there's an annual retirement investment I do because of the company I have. 00:36:10.000 |
Having more visibility into these accounts means I make that investment as soon as I can 00:36:15.220 |
now each year, which means I am getting more return on it. 00:36:18.080 |
Before, I would procrastinate, not because I didn't know it was important to do, but I 00:36:22.760 |
just didn't have a lot of data visibility and what was going on in my finances. 00:36:27.580 |
And that's going to make me so much money in the long term because I'm investing the money sooner. 00:36:33.600 |
So when you're looking for visibility in your finances, Monarch is the way to go. 00:36:40.100 |
One, you can link all your accounts together and get going in minutes. 00:36:42.420 |
And two, it's a tool that professionals love. 00:36:45.320 |
The Wall Street Journal named it the best budgeting app of 2025. 00:36:48.640 |
So don't let financial opportunities slip through the cracks. 00:36:52.120 |
Use code deep at monarchmoney.com in your browser for half off your first year. 00:36:57.820 |
That's 50% off your first year at monarchmoney.com when you use the code deep. 00:37:03.620 |
I also want to talk about our friends at reclaim.ai. 00:37:10.660 |
Reclaim is a smart calendar assistant built for people who value time as their most precious resource. 00:37:17.540 |
It automatically defends what they call focus time, but we know what this really is. 00:37:23.660 |
It will auto-schedule meetings, resolve conflicts, and protect your time for habits, tasks, and 00:37:28.800 |
It's a tool for people who care about deep work and worry about meetings making deep work 00:37:34.820 |
Basically, it's custom-built for listeners of this podcast. 00:37:40.820 |
Now, I've been messing around with Reclaim recently, and here's the best way I can summarize it. 00:37:44.440 |
It's like having an assistant who sits next to you and helps you manage your calendar, and 00:37:50.840 |
the same assistant has a prominent Cal Newport tattoo. 00:37:53.980 |
That's basically what you're getting with Reclaim. 00:37:56.860 |
They know the way we think about things, and it's going to help you act on that. 00:38:01.820 |
The average Reclaim user gains seven extra productive hours every week, cuts overtime nearly in half, 00:38:07.240 |
and sees major drops in burnout and context switching. 00:38:15.780 |
You can sign up 100% free with Google or Outlook Calendar if you go to reclaim.ai slash Cal. 00:38:23.500 |
Plus, Reclaim is offering my listeners an exclusive 20% off 12 months if you use the discount code 00:38:32.100 |
So visit reclaim.ai slash Cal to sign up for free. 00:38:39.220 |
All right, Jesse, let's get back into our discussion here. 00:38:41.520 |
So first of all, Jesse, how do you rate my artwork on this one? 00:38:46.040 |
I think this was pretty good as my artwork goes. 00:38:53.200 |
With the coloring and, well, it must be because of all the teaching. 00:39:00.820 |
So when you got the ByteDance papers, is that how you got most of that information, or did you already know it? 00:39:06.960 |
So I looked it up, but I mean, I knew about this before. 00:39:12.300 |
So I know that the two-tower system, basically there's ways to do recommendations. 00:39:18.040 |
The original approaches that, you know, came out of the machine learning community, 00:39:22.140 |
they don't scale when you replace thousands with billions. 00:39:27.000 |
So this sort of two-tower approach is something where you can run it with really large amounts 00:39:33.480 |
The papers get into, the one paper gets into the architecture. 00:39:37.200 |
They call it the IFS, Information Something System, that they use for TikTok and also two 00:39:42.840 |
That really gets into their mixture of the two-tower recommendations and the real-time stuff, 00:39:48.820 |
And it really gets into the architecture of how they do it, like in the nuts and bolts. 00:39:51.860 |
And then there's another paper that really gets into the weeds about how they store all of this very efficiently. 00:39:59.060 |
There's a type of hash table called a cuckoo hash that you learn about in computer science 00:40:04.800 |
And they have all these like interesting ideas they use. 00:40:06.700 |
And that also got in on what they were using here. 00:40:10.200 |
So like almost certainly it's like some mix of two-tower system plus integrating stuff that's 00:40:16.740 |
popular plus just a really, really good distributed system design. 00:40:24.160 |
I mean, I really think the reason why it performs better than other social platforms is the short-video format. 00:40:33.800 |
That's like the ideal circumstance for a recommendation system. 00:40:37.500 |
Is it normal for technology companies to publish papers about what they're doing? 00:40:51.760 |
And then at some point, they're like, we don't want to publish papers. 00:40:57.720 |
Anthropic publishes what they want to publish. 00:41:03.000 |
Like they want to be seen as like a real academic company, but they don't get in the weeds about their secret sauce. 00:41:06.960 |
But yes, you'll discover this at computer science conferences. 00:41:10.280 |
There's always a non-trivial fraction of papers that come from technology companies. 00:41:15.860 |
Based on their own data and like what they've observed. 00:41:23.260 |
I don't know if it's that it helps them keep research scientists, who want to publish, 00:41:25.140 |
or that it's just, you know, interesting, but it's definitely true. 00:41:30.980 |
So if you have doctoral students that want to keep on writing papers forever, they can 00:41:34.460 |
go to a private company and potentially still do that. 00:41:36.300 |
Well, part of the problem is the social media companies did this, like Twitter did. 00:41:40.000 |
So Twitter at first had an interface where if you were a professor, right? 00:41:46.380 |
You could get a bunch of data to do research on. 00:41:48.280 |
Like I want to do research on, you know, tweets and what they tell us about X, Y, and Z, right? 00:41:54.860 |
You could get a license as an academic institution. 00:41:56.920 |
It'd be like, yeah, give me the last 20 million tweets that were on this topic. 00:42:01.720 |
And then you could get them and then, you know, run research on them. 00:42:04.400 |
And then at some point, Twitter turned that off. 00:42:06.700 |
Like, no, no, our researchers will write papers on our data. 00:42:13.220 |
And so partially it allowed them to have really good papers, because Facebook did the same thing. 00:42:20.080 |
So you had to be at one of these companies to write papers about like their data. 00:42:24.240 |
And that gave them some control over what was published or not published, because they get final say. 00:42:29.320 |
So they kind of figured out at some point, like, hey, we don't want 00:42:32.060 |
people writing about our stuff if we can't control what they're going to publish. 00:42:37.840 |
Or you can go be like a fellow at Facebook or something and use their data, but they get to 00:42:43.480 |
give you the thumbs up or thumbs down on anything you publish. 00:42:46.300 |
And there's like a whole period with Facebook. 00:42:48.740 |
They don't do this as much anymore, but there's a whole period where the original papers were 00:42:51.860 |
coming out about the mental health impacts of some of their products, where 00:42:58.600 |
you would have to have a Facebook data scientist on your paper to get access to their data. 00:43:03.660 |
And the only papers coming out all had these like positive spins on everything. 00:43:10.420 |
And then eventually, like, the field was like, we're just going to do these papers without them. 00:43:13.580 |
And lo and behold, like the outcomes were a lot more negative. 00:43:17.140 |
Well, giving it to a third party, it's kind of like a New York Times reporter, 00:43:19.220 |
if they're, like, following somebody around for months and then you don't get to control what they write. 00:43:26.580 |
So anyway, it's interesting how that works, but that's what's going on. 00:43:29.220 |
This is a big thing in digital ethics: the role that ethics plays in the way we 00:43:34.700 |
run systems. And algorithms don't have ethics. 00:43:37.820 |
They don't have like an obvious way of having ethics inserted. 00:43:42.420 |
I mean, it's not, again, it's not like people are tuning these algorithms to do bad things. 00:43:45.540 |
It's just that these algorithms make recommendations blindly. 00:43:53.800 |
It's like machine learning systems approximate the underlying sort of processes and systems 00:43:58.240 |
that explain the patterns they've been given as training data. 00:44:01.160 |
They just build mathematical equations that have approximated those systems well enough to 00:44:06.180 |
properly predict, you know, will this person like this thing? 00:44:12.100 |
And so you're, again, you're approximating the dark as easily as you are the good. 00:44:22.460 |
Digital ethics is probably going to be, yeah. 00:44:24.660 |
The fact that you guys have the first program is pretty cool. 00:44:26.980 |
We're training up a lot of students that know CS and know ethics, and they're trying to figure this out. 00:44:32.280 |
We've got some questions here that sort of follow the same thread about TikTok and social media. The first one's from Kara: 00:44:42.380 |
I truly do not live on the slope of terribleness. 00:44:45.280 |
I use social media for about one hour per year. 00:44:47.880 |
Do you think I have an ethical obligation to stop that usage because it is continuing even 00:44:53.160 |
a little to these companies and maybe helping fuel others' addiction? 00:44:56.280 |
So for those who didn't listen to last week's episode, the slope of terribleness is the model 00:45:01.600 |
I use to explain the impact of social media platforms, especially curated conversation platforms. 00:45:09.600 |
And basically, the idea was it's not like these platforms have a constellation of unrelated problems you can pick and choose from. 00:45:19.340 |
And you start with the smaller problems, but the gravity of the slope is going to pull you 00:45:23.860 |
continually to worse and worse problems until, at some point, hopefully, you exert enough willpower to resist. 00:45:29.680 |
But that also takes up a lot of energy you could be applying to something else. 00:45:33.540 |
So your flourishing goes down, and then you waste a lot of energy making sure it doesn't go down 00:45:36.500 |
further, and so it just makes sense to stop using the platforms. 00:45:41.920 |
Now, Kara, I'm going to say something maybe people in my field might not agree with. 00:45:46.680 |
I'm not a big believer in the ethical sorting approach to thinking about our engagement with 00:45:55.560 |
This idea that picked up steam in the last decade, first on the left and now on the right, 00:46:00.480 |
is that what we need to do is sort of ethically rate people and organizations based on other 00:46:09.800 |
things they've done, and then this becomes the expression of our 00:46:16.220 |
ethical values: well, I've got to be very careful. 00:46:21.900 |
Can I like talk to this person or talk about this person? 00:46:26.940 |
And it's this idea that our sort of performative or visible ethics are based on who we interact with. 00:46:34.540 |
So we have to constantly be self-policing and trying to assess so that we can be very careful 00:46:38.980 |
that all of our interactions and engagements with the outside world carefully, carefully 00:46:44.560 |
match our sort of like internal ethical ideas. 00:46:46.700 |
I mean, there's a, you know, there's a general principle there that makes sense, but I think 00:46:51.300 |
And I think it ends up, you end up in sort of like capricious and tribal territory when you 00:46:57.420 |
So sure, if like there's a massively important principle that a company is like really violating, 00:47:04.300 |
you know, there's like a company producing rollerblades, but they use the bones of puppies to make them. 00:47:10.460 |
You're like, I, I don't want to buy those rollerblades. 00:47:13.940 |
But for the most part, I think it's, it's just not tenable to do this with all things in 00:47:17.620 |
your life and it leads to tribalism and capriciousness. 00:47:21.480 |
My approach, especially when it comes to technology is to have a value centric approach. 00:47:29.820 |
What do we value in terms of like our civic culture? 00:47:31.920 |
And that's what I want to assess things through. 00:47:34.020 |
If this product, uh, reduces the things I value in my life without a 00:47:43.460 |
commensurate positive upside, then I don't want to use it. 00:47:46.460 |
If this product is not just making my life worse, but really hurting like civic life or 00:47:52.200 |
this or that, then, then I don't think it's a good product. 00:47:54.920 |
And I want to find another thing to use if I don't want to engage with it. 00:47:59.260 |
Like, in other words, is it hurting things I value myself? 00:48:03.280 |
Is it hurting like these values, like values I hold true? 00:48:06.960 |
And if so, I don't want to use it, or I want to use it in a way that sidesteps those value conflicts. 00:48:11.120 |
So this seems more functionalist or pragmatic. 00:48:13.180 |
It's basically the approach of technology I give in my book, digital minimalism. 00:48:18.360 |
I don't need to assess everything this company has ever done. 00:48:22.680 |
I just want to say, does it make my life better or worse? 00:48:24.420 |
And if it's worse, sorry, buddy, I'm not using you. 00:48:27.900 |
Hey, TikTok, you haven't made a compelling case why I have to use you. 00:48:31.980 |
Your ads during the Super Bowl about how it helped the vet's bakery business or whatever? Not convincing. 00:48:41.800 |
I'm seeing the videos about the puppy bone rollerblading company, and it's kind of making 00:48:48.860 |
That's the way I think you should look at it. 00:48:50.520 |
So if you're using social media like one hour per year, and you have some hyper-focused 00:48:55.660 |
functionalist use, like, hey, this helps my company to do this each year when we have 00:48:59.960 |
some announcement and so whatever, that's fine. 00:49:02.020 |
The key is that's not making your life worse. 00:49:05.060 |
And if we just make that assessment, a lot of companies that we might otherwise want to ethically sort 00:49:14.980 |
aren't going to do too well anyway, because deep down, why don't we like them? 00:49:19.940 |
It's because their products are making our lives worse. 00:49:24.080 |
But I'm not a big believer of trying to ethically sort everything. 00:49:26.260 |
I just think it's not a way to run a civil society. 00:49:29.540 |
Use the stuff that makes your life better and be confident not to use the stuff that doesn't. 00:49:42.360 |
A lot of people from his generation do the same thing. 00:49:48.700 |
I think staying informed is useful, especially like as you go from your adolescence into adulthood. 00:49:57.980 |
It's like you should know what's going on in your world, right? 00:50:01.840 |
Because that's important as an adult, especially in like a participatory republic like the U.S. 00:50:09.180 |
And now we know why because of what we were just discussing in the deep dive. 00:50:16.180 |
The curation is being done by a blind machine learning algorithm, which is just building approximations of the systems and processes that help explain the patterns it sees. 00:50:24.660 |
And that could have no correlation to things like truth or civic value; it could just be inflaming darker impulses. 00:50:36.120 |
It's like the world's most reckless newspaper editor. 00:50:39.520 |
You know, it's like a psychopathic newspaper editor who like really has like no empathy for humankind and doesn't really know what's going on and is drunk all the time and has some weird fetishes. 00:50:50.760 |
It's, you know, it's like the worst possible curation is algorithmic curation. 00:50:59.040 |
I want human curation because all of the things that we value, human values, human ethics, these normative standards and guardrails we've had since the Gutenberg printing press. 00:51:08.720 |
Those are all there one degree or the other, sometimes more rigorous than other times, you know, sometimes more strict than other times, but they are there and unavoidable when it's human curation. 00:51:21.840 |
It could be websites that aggregate news, like AllSides, for example, which will be like, here's an issue. 00:51:28.140 |
Here's some publications on the left and on the right and some in the center, and they're all covering it, and you get a pretty good spread. 00:51:37.100 |
The goal here, because right now I can hear everyone's concern, there's often this sort of tribal concern: 00:51:43.000 |
what matters is bias, because, you know, my side's right and the other side's wrong, 00:51:47.820 |
and if they do that, they might be hearing from someone biased. 00:51:49.880 |
I mean, everyone thinks that like their side is very neutral and fact-based and the other side is crazy. 00:51:54.840 |
So the worry becomes, it's not enough to just get information from a human, it has to be the right humans, because a lot of stuff is biased. 00:52:02.920 |
My concern right now is not this person really doesn't like that person. 00:52:06.880 |
You know, it's, you know, Jimmy Kimmel does not like Donald Trump. 00:52:09.560 |
Greg Gutfield does not like Joe Biden, right? 00:52:12.480 |
I'm not trying to find neutrality. 00:52:21.300 |
What I'm after is that, deep down, there's some sort of culturally attuned, you know, normative ethical frame in the loop. 00:52:30.680 |
It could be completely biased or political content, but even that is much better than letting an algorithm make these selections. 00:52:37.040 |
So yeah, teach your son ways to stay informed where it's a human making the ultimate decisions about what's being talked about. 00:52:45.820 |
Like, look, people have different points of view and different goals, right? 00:52:51.480 |
And I'm a big believer in dialectic content consumption. 00:52:53.560 |
This would be a great thing to teach a kid: if they're really into one type of news source, like, I don't know, a young man who really likes listening to Joe Rogan, 00:53:10.220 |
you know, point them to another person who maybe comes at this a different way. 00:53:15.600 |
And when those things collide, you're like, oh, the world's complicated and interesting and all these sort of great things. 00:53:19.580 |
None of that happens if it's just algorithms. 00:53:21.420 |
So anyways, stay informed, but have a human in the loop. 00:53:29.200 |
But again, I'll take a flawed, biased, political hack any day over the combination of transformers and neural networks that goes into the user tower in a TikTok recommendation formula. 00:53:46.460 |
You often discuss the architectural limits of LLMs. 00:53:49.300 |
One way to overcome these limits and those of traditional computer processing is quantum computing. 00:53:54.260 |
Is this a more promising route towards a phase shift in AI and even superintelligence? 00:54:00.100 |
So first of all, why am I placing this question in today's episode, 00:54:03.400 |
when today's episode is about social media algorithms? 00:54:05.360 |
It's because what I want to do here is zoom out slightly on an application of the same underlying principle. 00:54:11.800 |
So what was the underlying principle that's driving today's conversation? 00:54:15.240 |
It's a principle that's at the core of my work as a digital ethicist, which is you have to understand technologies and how they work before you can start to make decisions about the role of those technologies in your life or in civic society. 00:54:27.180 |
The more you actually understand about the core of a technology's operation, the more effectively you can bring whatever humanistic values matter to bear to figure out what to do about that technology. 00:54:37.740 |
So this is sort of a broader example of that. 00:54:39.980 |
There's a lot going on with AI where the underlying technologies are complicated. 00:54:46.460 |
And when we don't try to understand at least a little bit about what's really going on, we can get pulled left and right into all sorts of different corners. 00:54:52.520 |
So here's a great example, because several people, I don't know where people are seeing this, Jesse, but several people in the last week, including one example from yesterday, 00:55:01.220 |
have been asking me about quantum, quantum and AI. 00:55:06.620 |
Several people in Georgetown or several people just on the street? 00:55:11.280 |
So there must be, I'm sure, some sort of big thing that went viral somewhere that I'm missing, but there's this idea out there that, well, quantum computing, that's what's going to unlock superintelligence, because, you know, party poopers like me wrote articles a month or so ago that were unfortunately telling people, hey guys, these language models are not just going to keep scaling until superintelligence by 2027. 00:55:34.980 |
They've hit a wall, and scaling the technology isn't there anymore. 00:55:40.640 |
And a lot of people recognized this, and it was kind of a disappointment for those who were betting their whole sense of self-worth and, you know, their whole philosophy of life on the idea of a cataclysm, a technology cataclysm, either positive or negative. 00:55:57.200 |
And so they brought it back to life by saying, well, quantum computers can do crazy stuff that regular computers can't. 00:56:01.720 |
So maybe quantum combined with AI will break through these scaling limits and then we'll have something like super intelligence. 00:56:11.920 |
Again, I hate to be the party pooper about this. 00:56:15.480 |
Quantum computers are not, as a lot of people think, just a super-powerful version of a regular computer, 00:56:22.600 |
where you can take anything you could do with a regular computer and now do it in a much more powerful way 00:56:27.080 |
by moving it to a quantum computer, like going from a 386 chip to a Pentium chip, 00:56:31.560 |
if you appreciate my 1990s Intel references. That's not how quantum computers work. 00:56:36.960 |
Quantum algorithms are a mix of computer science logic and actual quantum mechanics. 00:56:42.840 |
It's where you very carefully set up interactions between these quantum bits, right? 00:56:49.860 |
In such a way that when the wave functions collapse, it has the effect of searching a large search space: the collapse extracts an answer from that space far faster than having to do a linear search of the whole thing, which could be computationally infeasible. 00:57:08.960 |
But there's only certain problems that you can set up quantum algorithms to work for. 00:57:13.960 |
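The search speedup being gestured at here maps most closely onto Grover's algorithm, and it's worth noting that the speedup is quadratic, roughly the square root of N steps instead of N, rather than a single instantaneous collapse. Below is a toy classical simulation of Grover's iteration, purely my own illustration and not anything from the episode, just to show the numbers involved.

```python
import numpy as np

# Toy statevector simulation of Grover's search over N = 2**n items.
# Real hardware doesn't loop in classical time like this; the point is the
# ~sqrt(N) iteration count versus N classical checks.

n = 4                      # qubits
N = 2 ** n                 # search space size
marked = 11                # the single "answer" index we're searching for

state = np.full(N, 1 / np.sqrt(N))   # uniform superposition over all items

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1              # oracle: flip the phase of the marked item
    mean = state.mean()
    state = 2 * mean - state         # diffusion: inversion about the mean

print(f"Grover iterations: {iterations} (vs ~{N} classical checks)")
print(f"Probability of measuring the marked item: {state[marked]**2:.3f}")
```

With 16 items, three Grover iterations already put the measurement probability above 0.9, which is the flavor of speedup, but only for problems you can phrase as this kind of structured search.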
So Peter Shor, who was at MIT when I was there, had this big breakthrough where, like, hey, factoring large numbers into their prime factors, you could do this way. 00:57:21.740 |
And that matters. Not to get too technical, but RSA public key encryption, sort of the core way that we do key exchange for, say, secure commerce on the internet, is based on numbers whose prime factors are so large you can't easily figure out what they are. 00:57:35.160 |
So if you could figure out prime factors quickly, that would be a problem for a lot of cryptography. 00:57:38.720 |
That's something you can do with quantum computers. 00:57:40.520 |
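To make the factoring-versus-RSA point concrete, here's a toy example with deliberately tiny primes (real RSA moduli are hundreds of digits long). It's a sketch of the general idea, not production cryptography: knowing the factors of the public modulus is exactly what lets you compute the private key.

```python
# Toy RSA with tiny primes, to show why fast factoring breaks it.
# Requires Python 3.8+ for pow(e, -1, phi). Illustration only.

p, q = 61, 53              # the secret primes
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient, computable only if you know p and q
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent -- requires phi, i.e. the factorization

message = 42
ciphertext = pow(message, e, n)          # anyone can encrypt with (e, n)
recovered = pow(ciphertext, d, n)        # only the factorization yields d
assert recovered == message

# An attacker with a large-scale quantum computer running Shor's algorithm
# could factor n, recompute phi and d, and decrypt -- which is the concern.
```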
There's also a certain type of search you can do, a different way of exploring a search space, that's really useful, especially for simulating a lot of chemical or physical systems. 00:57:52.240 |
And quantum computers could do these sorts of physical simulations in a way that would be hard for a regular computer. 00:57:58.960 |
But these are kind of the big use cases right now. 00:58:02.000 |
So no, there's no obvious connection between these narrow things you can do right now with quantum and what we need to do to run, like, a generative AI model. 00:58:10.000 |
And to make matters worse, it's incredibly complicated and expensive to try to make these quantum computers have enough bits to actually even do those problems on big enough input sizes. 00:58:20.100 |
Because the more quantum bits you have, the more error you produce and the errors add up. 00:58:24.340 |
And we don't know how to get to the many thousands of bits we need to handle like really big problems. 00:58:29.160 |
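A rough back-of-the-envelope way to see why the errors add up, using made-up numbers rather than anything from the episode:

```python
# If each physical qubit operation fails with probability p, the chance that a
# computation with many operations runs cleanly shrinks roughly geometrically.
# Hypothetical error rate and operation counts, for illustration only.

p = 0.001                      # optimistic per-operation error rate
for n_ops in (1_000, 100_000, 10_000_000):
    p_clean = (1 - p) ** n_ops # probability the whole computation is error-free
    print(f"{n_ops:>10,} operations -> {p_clean:.2e} chance of no error")
```

That collapse in reliability is why large-scale machines need heavy error correction, which multiplies the number of physical qubits required well beyond the "logical" qubits the algorithm actually uses.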
And there are all sorts of approaches to that problem; they're working on it. 00:58:31.580 |
But no, it's not coming anytime soon to suddenly break through the scaling barriers in generative AI. 00:58:38.700 |
Now, yes, there are some very specific things around Gen AI where you could connect to one of these capabilities, like some sort of search through the sample space of things the model is going to look at. 00:58:50.600 |
Like, there are things that, in theory, in some future, a quantum machine could help with. 00:58:54.420 |
But there's no fundamental sense in which we're going to break through the barriers in training or building these things. 00:58:59.700 |
So no, quantum is not going to give us superintelligence. 00:59:03.180 |
But the reason why I bring that up is when you understand that technology, it gives you a completely different valence on this recent emphasis on this issue. 00:59:12.660 |
And it tells us, right, I think there's a great data point we suddenly learn as observers of technology and society. 00:59:18.920 |
I think it is taking down the mask of a lot of people who were putting out a lot of content online, in particular about all of these coming AI apocalypses. 00:59:29.300 |
The fact that they switched right away to like, well, quantum is going to do it says, oh, well, that's clearly nonsense. 00:59:36.840 |
So I see all along, you're just working backwards from a psychological need to need some sort of massive technology driven disruption to happen. 00:59:44.600 |
And then you'll just try to fill in the blanks any way you can. 00:59:47.740 |
It's like, when you understand quantum and know that that's kind of a crazy thing to say, it reveals a lot about the people who were saying things a year ago that seemed much less crazy. 00:59:56.040 |
Like, well, they know a lot about AI and they're saying that, you know, in 2027, we might have humanity extinguished by superintelligence. 01:00:02.060 |
We were taking that seriously, but now we can realize, with this pivot to quantum, that for some subset of those people it really is just fulfilling a psychological need for something disruptive, because that will bring some sort of meaning or focus to their life. 01:00:16.100 |
Understanding technology helps us better understand our cultural interaction with technology. 01:00:29.500 |
So if you're a computer science doctoral student at MIT, you have to have a minor. 01:00:33.460 |
So you have to go take a few classes in something that's not computer science as part of getting your doctorate. 01:00:38.600 |
And I had one friend in particular, but a lot of people did this, I think. 01:00:42.480 |
They're like, oh, I want to do quantum algorithms. 01:00:44.240 |
So I'll take some courses in physics: I'm a computer scientist at MIT, 01:00:49.240 |
I'll learn some quantum physics, and then I can do quantum computing. 01:00:53.780 |
But this plan actually failed, not in the sense that they failed the course, but in the sense that they never learned enough to do meaningful quantum computing work. 01:01:04.740 |
Because, it turns out, here's what I learned from them. 01:01:08.280 |
To really work with quantum computers, you need to understand quantum mechanics. 01:01:11.700 |
You can't just go take a course on quantum mechanics. 01:01:14.500 |
You got to like learn all of physics and related math and end up at quantum mechanics. 01:01:20.180 |
It's basically a multi-year thing; this has to be what you're studying. 01:01:24.700 |
It's like, okay, as I get towards like year four of my doctoral program, I finally know enough stuff, enough math and enough traditional physics to start doing the quantum mechanics. 01:01:36.440 |
Kind of similar to what you were talking about, the theme with Brian Keating. 01:01:43.720 |
Jesse's referencing an episode that would have come out a few days before you hear this, right? 01:01:47.300 |
My episode with Brian Keating, who knows a lot about this type of math. 01:01:50.500 |
But as you'll hear, it took college and multiple postdocs; it took him a long time to really learn all this stuff. 01:01:57.400 |
So anyways, as an aside, quantum computing is really hard and really cool and it's going to do some cool stuff, but it is not a miracle machine. 01:02:02.860 |
Don't listen to anyone who says otherwise, because I bet you the same people now saying quantum is going to give us superintelligence 01:02:07.820 |
were saying LLM scaling is going to give us superintelligence. 01:02:10.260 |
And I would bet if you go back four years, they'd be talking about how a blockchain-based internet was going to be the future as well. 01:02:17.320 |
It just, there is something exciting about massive disruption, but it's hard. 01:02:23.800 |
Most stuff is more complicated than it seems. 01:02:28.820 |
So again, we're deviating a little bit from the theme, but not really. 01:02:32.140 |
So what I like to do with these case studies is talk about people who have used the type of advice we talk about on this show successfully in their own life, right? 01:02:40.800 |
So what is the relevant advice on this show that would match what we're talking about today, which is about how these algorithms work and the dangers of algorithmic curation? 01:02:50.520 |
Well, one of the big ideas I talk about is that an effective way to not have your life be captured by the digital, especially by the algorithmically curated world on your phone is to make your life outside of your phone so deep that the stuff on your phone becomes a lot less interesting. 01:03:07.780 |
If your life is weird and shallow and chaotic and uncertain, you're like, I might as well numb or distract myself with the phone. 01:03:14.360 |
But when you intentionally construct a life outside your phone, on your own terms, that's deep, then it suddenly becomes a lot easier to step away from things like the TikTok curation algorithm. 01:03:24.700 |
So that's the case study I'm going to give you today is one about, as many of my case studies are, someone who figured out through trial and error how to actually start engineering a deep life. 01:03:36.580 |
It's not specifically about technology, but it's also all about technology because when your life is deeper, you have much more say over what you let into your life and not. 01:04:03.580 |
Kieran says, a friend of mine introduced me to your work in 2021. 01:04:07.300 |
And ever since then, I've been a voracious reader of your writings and an evangelist of all things Newportonian. 01:04:15.960 |
And within a few years of getting my master's degree in 1997, I had a pretty good dual career going in classical music composition and computer consulting. 01:04:26.040 |
I leaned mostly into computer work, taking semi-regular commissions to keep my artistic soul happy. 01:04:33.580 |
I just stumbled unintentionally through an enjoyable and lucrative 15-year stretch of contract-based self-employment. 01:04:38.520 |
In 2012, just a few years after my wife and I bought a house and had our first child, I decided I was done with the dual career thing. 01:04:48.040 |
Finally deciding to follow my passion, I set out to become the great composer, I should say Kieran capitalized great and composer here, that I believed I was destined to be. 01:04:59.540 |
I told all my computer clients I was closing up shop, wound down all those projects, and cast my net for bigger and bigger music commissions. 01:05:06.880 |
Long story short, that led to a steep decline in income, and as the primary breadwinner with a family and a big Toronto mortgage to pay for, I fell into a rapid downward spiral that left me completely burnt out. 01:05:17.880 |
It took me a full year to recover health-wise and another two years to rebuild my networks and regain the trust I had broken in every facet of my professional life. 01:05:25.960 |
I then vowed to work intentionally towards being happy and living the lifestyle I wanted, rather than focusing on any particular aspect of my career. 01:05:32.620 |
Today, I have a truly fantastic groove going. 01:05:35.660 |
Most days, I punch out at 3 p.m., whether in computers or music. 01:05:38.620 |
I spend the vast majority of my time in deep mode, and I'm producing the highest quality work of my life. 01:05:43.740 |
I collaborate almost exclusively with people who let me work at my own pace. 01:05:47.440 |
My income is the highest it's ever been, and I have a lot of quality time and energy left over to spend with my family. 01:05:54.040 |
It's a great example of some core ideas of lifestyle-centric planning, which is at the core of my theories about how to cultivate a deep life. 01:06:01.380 |
So what Kieran did first is what most people think you need to do to build a deep life, which is have some sort of radical change. 01:06:08.620 |
The way we think about it is the magnitude of the radicalness of my change directly corresponds to the magnitude of how much better my life will be. 01:06:15.940 |
So if I want my life to be five times better, I need a five times more ambitious radical change I'm going to make. 01:06:26.020 |
But that's not how it works, because what matters in your day-to-day satisfaction is not any one factor. 01:06:30.420 |
It's many, many different factors that affect your day-to-day subjective well-being. 01:06:34.040 |
I call those many, many factors your lifestyle. 01:06:36.000 |
So if you go for one radical change, often what you're doing is getting the short-term fix of just doing something radical, which does feel good in the moment, that's true. 01:06:47.760 |
And then you're taking at best like one aspect of your life that's important to you and making it bigger, but you're ignoring all the other aspects of your lifestyle. 01:06:57.260 |
So either those stay the same, or what's more likely is you make this one aspect of your lifestyle better, but as a result, these other aspects get worse. 01:07:04.780 |
And when you add it all up, you're worse than when you began. 01:07:07.160 |
And that's what happened to Kieran. It's like, I love the inspiration of writing music. 01:07:11.040 |
I get a rush thinking about the idea of being a full-time composer and I like doing creative work and pushing myself. 01:07:17.520 |
So let me take that aspect of my lifestyle and put all my chips on the table there. 01:07:20.500 |
The problem is it made these other aspects, like your financial security, time with your family, your stress, your health, all get worse. 01:07:32.840 |
So one radical change very rarely makes all the difference. 01:07:36.060 |
So what he did instead was essentially lifestyle centric planning, where you look at your whole lifestyle and you systematically make changes. 01:07:43.740 |
You assess them, how they're going to affect all aspects of your lifestyle. 01:07:46.680 |
And you're looking to move everything up, but at the very least, not bring anything down. 01:07:50.460 |
And when he looked at his whole lifestyle, he's like, oh, if I keep my computer consulting, I can still do music. 01:07:57.060 |
And if I get better at my computer consulting, instead of taking on more work, I can raise my rates. 01:08:02.700 |
And now I can make more money while spending less time. 01:08:05.160 |
He's done by three. And I can bump up this need for creative, uh, pursuits and improving my craft. 01:08:14.100 |
Like I can build up my ambition for the composition I'm doing. 01:08:17.620 |
Now I'm not bumping that up as much as just going full time to being a composer, but I can still improve that while keeping these other things higher, better as well. 01:08:24.300 |
And in the end, he had a much, much better lifestyle. 01:08:28.380 |
His life became deep because he cared about all aspects of his lifestyle, not just one. 01:08:32.220 |
So again, what does that have to do with tech talk? 01:08:35.400 |
Cause once you take control of your life and make your life deeper, what's going to happen is the attractions of the diversions, the digital attention merchants' wares, are not going to be as impressive to you anymore. 01:08:48.140 |
And it's going to be much easier to make these sorts of intentional decisions. 01:08:55.900 |
We often try to do calls when we can as well. 01:09:04.220 |
We met last year, actually, when I brought you in for a recording on my own podcast, Two Dads, One Car. 01:09:09.240 |
And thank you so much for that wonderful conversation. 01:09:11.920 |
Since our chat, many of my recent guests have talked about how their kids need us even more as they get older and head into the high school years. 01:09:20.460 |
And I think you shared that as well, um, have you adopted any new practices recently to accommodate this extra pull on your time from your kids on top of your role at Georgetown? 01:09:42.420 |
It's like, remember Comedians in Cars Getting Coffee? 01:09:48.380 |
Uh, we drove around Takoma Park and he set up cameras in the car and kind of interviewed me. 01:09:53.900 |
He came down, I think from Canada or something. 01:09:58.620 |
It's somewhere, I don't know, on YouTube or something like that. 01:10:02.100 |
I have a few thoughts about it, including thoughts that connect this back directly to the technology themes we're talking about, as well as some broader thoughts as well. 01:10:10.760 |
Uh, here's one of the changes I'm making to account for the fact that my boys are all at the age now where, like, dad time really matters. 01:10:17.880 |
They're not really young and they're not out of the house yet. 01:10:26.120 |
And what I mean by that, that sounds trite, but there's actually something deep there. 01:10:30.420 |
Time with them is the priority over other things I might be interested in doing. 01:10:36.760 |
If you have multiple kids, that's where the priority goes. You know, you have work, you have to provide for the family, but you keep that reasonable. 01:10:52.960 |
I am, I am sore as we record this because me and the other coach of one of my kids, little league teams did a two and a half hour practice. 01:10:59.960 |
And when you're at my age, two and a half hours is a long time to be like hitting ground balls and catching balls and throwing balls. 01:11:14.960 |
Um, a lot of time with my kids doing things with my kids because you know, that has to be, that is my hobby. 01:11:21.760 |
And that's what I mean by my hobby is like, that's actually my priority when it comes to spending time and doing things outside of work. 01:11:26.560 |
And then that'll change again as you get older and your kids move on. 01:11:28.560 |
You got to fill that in with other things or, you know, things get sad before that point. 01:11:32.760 |
You have to have other things going on as well. 01:11:34.360 |
But, uh, that's the main thing I'm doing now: that's priority one, outside of the fundamentals of just making sure the lights are on 01:11:40.480 |
and I'm not having repeated heart attacks. But I think that matters. 01:11:46.480 |
Well, I think it's really important if we're talking about like kids coming up to high school age, do not let them be exposed to algorithmically curated values. 01:11:56.080 |
Do not let them be online, uh, with unrestricted access to the internet. 01:12:05.280 |
They're at an age where they need these sort of human values that those algorithms do not have. 01:12:09.600 |
They need repeated examples of those values applied. 01:12:12.400 |
And the only way to do that is to be there with them, doing things with them. 01:12:17.440 |
Let me tell you about this thing we just did and what I noticed there. 01:12:27.720 |
This book we're reading, what are you getting out of this? 01:12:29.200 |
All of those types of discussions when you're doing things involved in your kids' lives are your chance for helping them learn and understand the normative principles by which you live. 01:12:39.480 |
They do not get those values from their phone. 01:12:41.880 |
If you give them a phone, that becomes their main source of interaction with the world. 01:12:45.840 |
You have put them into an upside-down world in which, um, there is no humanistic core. 01:12:59.080 |
We know all about this; we talked about how the slope of terribleness leads you to all sorts of weird or bad places. 01:13:06.280 |
I want to help be their filter on and exposure to the world, 01:13:10.280 |
their interpreter of what's going on in the world. 01:13:17.160 |
I want to be that filter, not a two-tier, two-tower recommendation system running within Oracle's cloud infrastructure for TikTok. 01:13:27.000 |
That system has none of the values I want my kid to have. 01:13:30.520 |
You should be the curation algorithm in your kids' lives, at least through, you know, upper elementary and middle school, not technology, not these algorithms, right? 01:13:42.600 |
Whatever curates their world is imprinting in their mind what values matter and what doesn't. 01:13:45.960 |
And the things an algorithm imprints are almost certainly not going to be the things you want. 01:14:00.360 |
We've got some articles from, like, the last week about TikTok. 01:14:03.080 |
So we can better understand like what actually is happening. 01:14:08.040 |
It's not going to be pretty, but before we get there, let me take another quick break to talk about a sponsor. 01:14:12.360 |
The holidays are approaching Halloween, Thanksgiving, Hanukkah, Christmas, all of them. 01:14:17.720 |
You can get what you need to personalize your home for these holidays with Wayfair. 01:14:22.600 |
We're talking, you know, decorations and inflatables, but also like stuff inside. 01:14:26.120 |
Like accent pillows or like maybe you're hosting some sort of big dinner for these various holidays and 01:14:32.040 |
you can make your holiday events a breeze with quality cookware that will wow any guests, et cetera. 01:14:36.680 |
As long time listeners know, my favorite holiday to decorate for is Halloween, and Wayfair has been one of my go-to sources for the props and decorations and interior accents I need to make my grand schemes for Halloween haunts work. 01:14:53.240 |
I always look each year like, Hey, what is, what are the new props that Wayfair has? 01:14:59.080 |
Jesse, they introduced this year, a new prop that man, I could have used last year. 01:15:03.240 |
So, you know, I did a UFO-themed haunt last year, and one of the hard things was actually trying to build a good crashed UFO for the opening scene of our narrative. 01:15:12.840 |
Well, Wayfair, I just discovered they have a new UFO themed prop. 01:15:20.680 |
So it's nine feet tall and the UFO is up top. 01:15:24.360 |
And then there's, like, an abduction beam, you know, the light coming down from it. 01:15:28.920 |
And then at the bottom, there's like someone down the bottom that's about to be pulled up into the UFO. 01:15:35.400 |
So, you know, again, Wayfair knows what they're doing. 01:15:38.040 |
Wayfair is huge selection of home items and every style makes it easy to find exactly what's right for you. 01:15:44.200 |
Like, whatever budget you're looking for, whether it's, hey, I just want something on the lower end to use at a kid's party, or a higher-end thing. 01:15:53.560 |
That's going to be like an impressive piece of furniture that you'll have in your house for, for, you know, years to come. 01:15:57.880 |
They have what you need at whatever budget point you're looking for. 01:16:01.080 |
And with free and easy delivery, even for big stuff like tables or sofas, there's no limit to what you can get from Wayfair. 01:16:07.880 |
So get organized, refreshed and ready for the holidays for way less. 01:16:11.480 |
Head to wayfair.com right now to shop all things home. 01:16:15.560 |
That's W-A-Y-F-A-I-R.com, Wayfair, every style, every home. 01:16:21.560 |
I also want to talk about an app that I think is really smart. 01:16:26.600 |
Headway is a daily app that keeps your brain sharp with key ideas from the world's best nonfiction. 01:16:34.120 |
It has 1,800 summaries across more than 30 nonfiction categories from productivity and happiness to money and beyond. 01:16:42.200 |
And with Headway, each session you do takes 15 minutes, making it easy to stay mentally active, whether you're commuting or taking a break, or it's like you're switching from one task over to another. 01:16:52.200 |
So it's made for people who want to keep learning and evolving, even on busy days. 01:16:58.280 |
You can think of Headway as your personal growth guide, right? 01:17:02.280 |
A way to take in useful information in an otherwise busy life, give you personalized recommendations that can expose you to new ideas that make a difference, 01:17:11.880 |
So it's very easy, a delight, to use this app. 01:17:15.640 |
Headway has over 50 million downloads and 2 million monthly users, so we kind of know it works. 01:17:22.520 |
It's ranked number one in education on the US app store and frequently featured as the app of the day by both Apple and Google. 01:17:31.080 |
So a lot of people have already discovered Headway's power to help you grow, using some of the world's best ideas, just 15 minutes at a time. 01:17:41.000 |
So go to makeheadway.com/deep to find out more and use the promo code DEEP. 01:17:49.240 |
That's makeheadway.com/deep and use the promo code DEEP. 01:17:56.680 |
All right, Jesse, let's move on to our final segment. 01:18:02.440 |
I got three articles here all about TikTok that I haven't really looked at. 01:18:06.120 |
I'm assuming, Jesse, these are all going to be uplifting and inspiring, maybe a little bit intellectually 01:18:10.360 |
sophisticated, and we'll come away saying kudos and plaudits to American ingenuity. 01:18:16.680 |
It's going to make us just feel really good about culture. 01:18:20.280 |
It's going to be something like TikTok helps school children understand the beauty of Jane Austen, 01:18:26.840 |
leading to impromptu presentation of Emma that wows critics. 01:18:31.560 |
Like this is the type of thing I think we're going to find. 01:18:36.280 |
Now I'll put on the screen for those who are watching instead of listening. 01:18:42.440 |
Canadian privacy officials say, oh God, we're not off to a good start here. 01:18:47.400 |
TikTok's efforts to stop children using the app and protect their personal data have been inadequate. 01:18:53.880 |
The Canadian investigation has found hundreds of thousands of children in the country use TikTok each 01:18:57.560 |
year, despite the firm saying it's not intended for people under the age of 13. 01:19:04.280 |
So we have this like algorithmic value-free curation and we're showing it to kids under 13. 01:19:18.040 |
You know what can prevent them from going on? 01:19:20.520 |
Not letting an 11 year old have access to the internet. 01:19:29.000 |
I thought this one was going to make us feel better about the human condition. 01:19:32.360 |
Why people on TikTok think the rapture is coming and the world is ending this week. 01:19:38.200 |
Oh man, this article is from yesterday, Jesse. 01:19:47.560 |
That is according to the worshipers on TikTok. 01:19:49.800 |
People are convinced that the end of days will take place between today and tomorrow. 01:19:55.240 |
So if you're listening to this, phew, crisis averted. 01:20:02.440 |
The phenomenon stems from the evangelical Christian belief that Jesus Christ will return to earth 01:20:06.440 |
and ascend to heaven with only his true believers. 01:20:10.280 |
So a South African pastor claimed in a YouTube video, you know, that Jesus told 01:20:16.840 |
him that this is when the rapture was coming, but TikTok super amplified this. 01:20:21.560 |
It's a corner of TikTok called RaptureTok where users are posting videos of themselves, 01:20:26.520 |
either discussing or preparing for the end of the world. 01:20:28.760 |
Christians on TikTok staunchly believe that chaos will erupt on earth when they're swept up. 01:20:33.880 |
Leaving earth, according to Christians on TikTok, is a glorious occasion and a great honor. 01:20:38.040 |
Don't say Christians on TikTok, say it's the people on RaptureTok. 01:20:43.080 |
But this is a bigger point about what I was talking about. 01:20:46.440 |
Why? Because no reasonable human-curated news source would give a ton of print to just 01:20:54.680 |
this random clip from a random church and be like, I don't know, some guy at some random 01:20:59.640 |
church in South Africa said this. Like, people say crazy stuff all the time. 01:21:02.440 |
We better put this on the front page of our newspaper and put it on the news? No human editor would do that. 01:21:11.240 |
The algorithm figures out like, you know, this type of stuff does well with a lot of people. 01:21:20.280 |
And once it's in the trending thing, it's going to show it to a lot of other people who 01:21:23.080 |
might not have previously shown they're interested in RaptureTok stuff. 01:21:29.320 |
It is crazy that this, like, particular little clip has gone wide, and it never would in a 01:21:35.960 |
human-curated context, because how many people every day, you know, have some prediction 01:21:40.600 |
for when the earth is going to end? But this is what happens. 01:21:45.480 |
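To see how value-free the dynamic described above is, here is a minimal sketch of an engagement-only ranker. The videos, numbers, and scoring function are all invented for illustration and have nothing to do with TikTok's actual code; the point is only that when the sole signal is attention, whatever holds attention rises.

```python
from dataclasses import dataclass

# Toy engagement-only ranker: the only input is how long viewers stick around,
# so nothing about truth, civic value, or safety ever enters the scoring.
# Hypothetical data and scoring, purely for illustration.

@dataclass
class Video:
    title: str
    avg_watch_seconds: float   # how long viewers stay on the clip
    completion_rate: float     # fraction who watch to the end

def engagement_score(v: Video) -> float:
    # A value-free objective: reward attention, nothing else.
    return v.avg_watch_seconds * (1 + v.completion_rate)

candidates = [
    Video("Local library book-club recap", 8.0, 0.20),
    Video("Pastor says the rapture is this week", 31.0, 0.85),
    Video("Teen car-surfing stunt", 27.0, 0.90),
]

for v in sorted(candidates, key=engagement_score, reverse=True):
    print(f"{engagement_score(v):6.1f}  {v.title}")
```

Run it and the prophecy and the stunt outrank the book club every time, which is the amplification loop being described: high watch time feeds the trending pool, and the trending pool feeds more watch time.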
We have one chance here, Jesse, to be uplifted by the possibilities of TikTok. 01:21:51.880 |
Teens facing criminal charges after friend dies during TikTok surfing stunt. 01:21:58.600 |
One of the teenagers was standing on top of a table tied to a car when the crash took place. 01:22:08.520 |
This is not making me feel good about TikTok's impact on the world. 01:22:12.200 |
Uh, let's, let's hear a little bit about this. 01:22:15.560 |
The incident took place in March when someone was driving at 35 miles per hour with her friend 01:22:21.800 |
standing on the trunk and rear windshield of her car. 01:22:26.840 |
When you fall off things at 35 miles per hour on pavement, you can hit your head. 01:22:30.840 |
And it looks like this has led to other incidents like it. 01:22:35.560 |
Because the algorithm used to do recommendations, you know, is capturing some sort of 01:22:43.000 |
fundamental human tendency: we like stunts that are outrageous and attention-catching, 01:22:49.800 |
and they catch our attention in a way that gets good watch time. 01:22:52.840 |
And so it promoted this in a way that, like, you would never see otherwise. 01:22:59.320 |
You would never see this on a television show or in a newspaper. 01:23:04.920 |
You wouldn't see people dedicating their podcasts to it, like, week after week, saying, 01:23:09.320 |
Look at this person, this kid standing on cars. 01:23:12.840 |
All of this is, I think, emphasizing the point that we got to in the beginning 01:23:18.200 |
when you're willing to sort of concede or give over these issues of curation of information 01:23:24.360 |
to a mindless algorithm that really has no values in place, it leads to dark places. 01:23:29.560 |
It leads to people dying or weird news obsessing people or captures kids because, you know, 01:23:38.120 |
This thing is completely hacking their brainstem. 01:23:41.240 |
Like, where do we end this, this look at the algorithms today? 01:23:44.600 |
Algorithmic curation, and I got at this last week as well, is not a good innovation. 01:23:50.760 |
It really is only good if you have like a large stock position in one of these companies. 01:23:57.400 |
I promise you as a user of these tools, there are other things out there that can fill your 01:24:02.520 |
life with interest and entertainment and funniness and new ideas. 01:24:05.880 |
You do not need it; there are tons of sources of that that are probably gonna be better than watching kids 01:24:10.920 |
standing on tables pulled by cars, leading them to fall and have, you know, life ending brain 01:24:16.200 |
injuries. There's better ways to be entertained. 01:24:17.800 |
So there's really no reason for these things to have such a big cultural position other than the 01:24:22.680 |
fact that, like, they're very good at it and a small number of people make a lot of money. So there we go. 01:24:30.200 |
The problem is not that this is some Frankenstein construction in which people put the wrong values into it, which we could fix if the right 01:24:36.200 |
people ran it. Algorithms, by definition, when it comes to recommendation systems, do not share our 01:24:42.040 |
values. They do not know what values are, and to let them curate information on any sort of mass scale 01:24:46.600 |
is a recipe for disaster. The solution to all these issues is not putting the right people in charge 01:24:52.120 |
of algorithms, but allowing real people to say, I don't want algorithms making these decisions for me. 01:24:57.080 |
Anyways, that's all the time we have for today. Thank you for listening. We'll be back next week 01:25:01.080 |
with another episode. And until then, as always, stay deep. Hey, if you liked today's discussion 01:25:07.800 |
about the TikTok algorithm, you got to listen to last week's episode, episode 371 called, 01:25:12.440 |
"Is it finally time to leave social media?" where I get in more detail about how these platforms make 01:25:17.480 |
our lives worse. It's a great companion to today's episode. So check it out. Two days after the 01:25:22.600 |
assassination of Charlie Kirk, I sent a short essay to my newsletter. I wrote it in about 15 minutes, 01:25:29.000 |
but it was about as raw or as pointed as I ever get.