
Cal Newport On Why Social Media Can’t Be Tamed | Deep Questions Podcast


Chapters

0:00 Cal's intro
0:56 Cal explains the 2 things about the algorithm
2:40 Cal talks about neural networks
3:30 Can we get rid of that in social media?
5:30 Explosive virality
8:16 Section 230

Transcript

All right, Jesse, enough about that. - Actually, we did get an email from Carl. He's a big fan. - Was it about the air conditioner? - It wasn't, but it should be. - Well, then I'm not interested. - But he actually, he's been emailing me, and he was interested in your talk about the algorithm, when you were talking about the Obama call for more regulation for-- - During one of our Cal Reacts to the News segments.

- Yeah, a couple of weeks ago. So he thought it was enlightening and he's like talking about the algorithm and how it's like one of the worst things about social media and he'd like to kind of hear you talk about that more. And he's even wondering why an algorithm is even needed.

- Yes. - So if you could just dive into that a little bit. - Well, that's interesting. I mean, there's two things about the algorithm. One is it's not an algorithm, right? So if we're gonna get a little bit more precise with our terminology, I'm teaching algorithms, for example, this semester.

And an algorithm, technically, is a finite sequence of unambiguous steps that are executed in a clear control order. What goes on with social media is way more ambiguous. So instead of really an algorithm, think about it as a large bank of black boxes that takes in lots of information and then, using that information, helps make decisions about what we should show this particular user.
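To make the textbook definition concrete, here is a classic example of an actual algorithm in Cal's sense, a finite sequence of unambiguous steps with a clear control order (Euclid's gcd), in contrast to the black-box systems discussed next. This is an illustrative sketch, not anything from the episode itself.

```python
def gcd(a, b):
    """Euclid's algorithm: every step is fully specified, and it always terminates."""
    while b != 0:
        # One unambiguous step: replace (a, b) with (b, a mod b).
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

Every input follows the same transparent path through the loop, which is exactly what makes it inspectable in a way the recommendation systems are not.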

It's actually assigning weights to different possible things to figure out which tweet to show this user, which Facebook newsfeed posts to prioritize. And what is actually happening inside these black boxes is not just a single algorithm that you can study. It's really, for the most part, a collection of mainly neural nets.
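A hypothetical sketch of the "bank of black boxes" idea: several opaque scoring models each emit a signal for a candidate post, and a combiner turns those signals into one weight used to decide what to show. All the model names, signals, and numbers here are made up for illustration; no platform's real code looks like this.

```python
def combined_weight(signals, mix):
    """Weighted sum of black-box outputs; in practice the mix itself is also learned."""
    return sum(mix[name] * value for name, value in signals.items())

# Toy stand-ins for the outputs of separate opaque models per candidate post.
candidates = {
    "tweet_a": {"predicted_click": 0.9, "predicted_dwell": 0.4},
    "tweet_b": {"predicted_click": 0.2, "predicted_dwell": 0.8},
}
mix = {"predicted_click": 0.7, "predicted_dwell": 0.3}

# The feed shows the highest combined weight first.
ranked = sorted(candidates, key=lambda t: combined_weight(candidates[t], mix), reverse=True)
print(ranked)
```

The point of the sketch is that even in this tiny version, the final ordering depends on several interacting numbers, none of which a user ever sees.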

So you have neural networks that are trained through backpropagation to try to essentially learn what it is that appeals to you and what doesn't. But it's not just one neural net. There are multiple different neural nets that do different things, and then their feedback is fed into each other in somewhat complicated ways.

And so it's a real, almost intractable mess of black boxes, different networks hooked up to each other in a complex manner, that in the end spits out its recommendations in terms of weights: show this tweet, not that one. So it's completely obfuscated.

It's very difficult to try to find out what is going on inside those networks. I mean, if you wanna really get into neural networks, what they're really doing, as we've talked about before, is essentially building up regions in multidimensional space that they can associate with different outputs.

So this is what a standard, I don't wanna get too far down this, but a standard shallow neural net is basically doing: dividing up the space in which the inputs exist into these different regions so it can categorize things. And when you get something like a deep learning neural net, you have multiple layers that are each doing something like that, and then they're communicating with each other.

So one layer might look at a picture and just be really good at figuring out where the straight edges are and how many there are. And then another layer takes that input and is really good at figuring out, well, if there's this many straight edges, then it's probably a crosswalk.
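The layered idea in the crosswalk example can be sketched very loosely: one "layer" extracts a simple feature (here, counting stripe-like rows in a toy image), and the next layer maps that feature to a category. Real deep nets learn these functions from data through backpropagation; these hand-written versions are only an analogy.

```python
def count_straight_edges(image_rows):
    """Toy first layer: count rows that are a solid run of 1s (a 'stripe')."""
    return sum(1 for row in image_rows if row and all(px == 1 for px in row))

def classify(edge_count):
    """Toy second layer: many parallel stripes suggest a crosswalk."""
    return "crosswalk" if edge_count >= 3 else "not a crosswalk"

# Alternating solid and empty rows, like painted crosswalk stripes.
image = [[1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1]]
print(classify(count_straight_edges(image)))  # → crosswalk
```

The handoff between the two functions is the point: each stage only understands its own small job, and meaning emerges from the composition.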

And so it all gets kind of complicated, but that's the word I wanna emphasize: complicated. So it's not an algorithm we can tweak easily, it's not an algorithm we can understand easily. Can we get rid of that in social media? I mean, we can imagine that being true. We can imagine that being true because we used to have social media without these quote-unquote algorithms.

This was basically all social media before 2009. So if you used Twitter in 2007, there was no complicated algorithm involved in what you saw. All it did was take all the people you follow, gather their tweets, sort them in chronological order, and put the newest one at the top.

Facebook was doing the same thing. You have different people that you, I don't know what their terminology was, you like, you friend. See, people don't even talk about this on Facebook anymore, it's just become this newsfeed distraction machine, but you would friend people, and it would look at what they were posting and show it in chronological order.

And then at some point after that, as we've talked about before, you get these complex neural-network-based algorithms that say, well, I'm not just gonna show you things in chronological order, I'm gonna prioritize things that are gonna increase engagement, which means time on service. And that's where we get this complicated interplay where we no longer understand why it shows us what it shows us.
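The difference between the two eras can be shown side by side: the pre-2009 feed is just a sort by timestamp, while the engagement-ranked feed sorts by a predicted score. The posts, scores, and the `predicted_engagement` field are all hypothetical stand-ins for the neural-net outputs described above.

```python
posts = [
    {"text": "calm update",    "posted_hours_ago": 1, "predicted_engagement": 0.2},
    {"text": "outrage bait",   "posted_hours_ago": 9, "predicted_engagement": 0.9},
    {"text": "vacation photo", "posted_hours_ago": 3, "predicted_engagement": 0.5},
]

# Pre-2009 style: newest first, no model involved.
chronological = sorted(posts, key=lambda p: p["posted_hours_ago"])

# Post-switch style: whatever the model predicts will keep you on the service.
engagement_ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print([p["text"] for p in chronological])
print([p["text"] for p in engagement_ranked])
```

Same three posts, two completely different feeds; the second ordering depends entirely on a number the user never sees.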

Many of the impacts of the switch to this quote-unquote algorithmic sorting of social media content were unexpected. There were side effects. All you're trying to do with these networks, these neural nets all connected together, is make the user happier in the sense of "I wanna spend more time looking at this."

But it had all sorts of side effects. I think one of the big ones is what we talked about when we covered Jonathan Haidt's Atlantic article a few weeks ago: it introduced intense virality, right? Because now I could post something, other people could start spreading it, and the algorithm will see that it's popular.

So it's gonna start showing it to more people, which gives it a chance to be even more popular, and you can have explosive virality. And Haidt's point was that explosive virality led to an environment that was high stakes and terrifying. You could be a hero or canceled in 12 hours, just like, boom, it could just happen.
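The feedback loop behind that explosiveness can be simulated in a few lines: the more a post is shown, the more shares it collects, and the more shares it has, the more the ranking system shows it. The share rate and amplification factor are arbitrary numbers chosen for illustration.

```python
SHARE_DIVISOR = 10     # assume 1 in 10 viewers shares the post
VIEWS_PER_SHARE = 5    # assume each share brings 5 more viewers via the algorithm

views = 100
history = []
for _ in range(5):
    new_shares = views // SHARE_DIVISOR
    # The ranking system amplifies whatever is already popular.
    views += new_shares * VIEWS_PER_SHARE
    history.append(views)

print(history)  # → [150, 225, 335, 500, 750]
```

Each round multiplies the audience rather than adding to it, which is why a post can go from obscure to everywhere in hours; a chronological feed has no such loop.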

And that completely changed who used social media and how they used it. Another issue with these types of algorithms, I talked about this in Digital Minimalism, is that it made it way less predictable what reaction you were gonna get to your posts. And that also touched on social psychology: the intermittent reinforcement of sometimes people like what I do and sometimes people don't became a real addictive factor in pulling people back. Especially for Instagram and Facebook in the earlier days of these algorithms, now that people could like things and things could be shared, you had this much more unpredictable and unstable environment of feedback.

And that was very addictive. Like, are people gonna like this? Are people not gonna like this? So it's possible, but there's no reason why companies would do it, because, Carl, it makes them way more profitable. It's fantastically more profitable. People use these services all the time because they are perfectly optimized to get you to do that.

If you went back to 2007 Facebook, it is way more boring than 2022 Facebook. If you go back to Twitter, early Twitter, before they began building these algorithmically juiced timelines, it was a way more boring place. Like most of what you saw was not interesting 'cause most of what most people you follow post is not interesting.

So it was good for us as a culture, because it was a diversion we didn't wanna spend too much time on, but it's a completely different beast now. And I believe the timeline was: Twitter made that move first, and then Facebook and Instagram followed.

So when Facebook saw Twitter move away from a strict chronological timeline, Facebook realized they had to do something similar. So now, I don't know if that genie is gonna go back in the bottle for those companies, because it would be like going to an oil company, like going to Exxon, and saying, "I know you've invented these technologies which allow you to get 10X more oil out of the ground per day, but could you stop using them?" And they're probably gonna say no.

They're probably gonna say, "No, this is our main technology for getting oil out of the ground. We've gotten really good at it. We don't wanna artificially go back to a time when we were worse at it." Yeah, so it's interesting. You know, I got another email about that segment.

So that particular article was reacting to Obama's call to regulate social media, and I mentioned Section 230. So there's this rule on the books, Section 230 of a larger piece of legislation, that has a big impact in that it protects, as we talked about, it protects platforms from liability for what other users are posting on the platforms.

And when it was invented, we talked about this before, the intent was thinking about comments. You know, I have a blog, it's 2008, it has a comment section. And maybe someone's gonna come along and leave a comment on that section that, you know, violates some law, like revealing insider trading or committing libel against someone or something like this, right?

And Section 230, very crudely speaking, would say, "I'm not responsible for that. I just had a comment board, but I'm not the editor selecting information." And the idea was these large social media platforms were using that coverage to say, "Look, we're not an editor, right? So we can't be held liable for what people actually say on it." And so one of the pushes for regulation is to get rid of that protection and say, "No, you are liable for what's posted on your platform, just like a newspaper is liable for what they print," and this might lead to more aggressive or more effective content moderation. I said I might be interested in that, just because anything that would lead social media to fragment and have to become more niche, I thought would be good. Anyways, I got an email from a lawyer who used to specialize in Section 230.

Long story short, Jesse, he was basically saying it's a tricky path to go down. He's like, "If you pull 230, and you do so because there's an impact you want on these very large social media companies," he said, "actually, they would probably be okay. They can afford the legal expertise to try to walk around and sidestep these liabilities, and it's the small players who are gonna get hurt.

It's gonna be calnewport.com. It really could be a problem for smaller companies." So look, I'm not a lawyer, but it's an interesting note. And he's like, "It's just complicated." So there you go. Good question, though, Carl. Yeah, I like people emailing. Jesse is jesse@calnewport.com. He loves to hear about the show, what you like, what you don't like.

J-E-S-S-E. J-E-S-S-E@calnewport.com. Somebody asked about that, that's the only reason why I threw that in there. Yeah, yeah. It's not G-E-S-S-E-Y. (both laughing) It's all right. There are a lot of Jesses spelled with an I-E. Hmm, fair enough, fair enough. (upbeat music)