Jonathan Haidt: The Case Against Social Media | Lex Fridman Podcast #291


Chapters

0:00 Introduction
1:25 Social media and mental health
15:14 Mark Zuckerberg
24:51 Children's use of social media
35:36 Social media and democracy
51:42 Elon Musk and Twitter
68:13 Anonymity on social media
74:12 Misinformation
81:06 Social media benefits
83:50 Political division on social media
90:22 Future of social media
96:14 Advice for young people

Transcript

The following is a conversation with Jonathan Haidt, social psychologist at NYU and critic of the negative effects of social media on the human mind and human civilization. He gives a respectful but hard-hitting response to my conversation with Mark Zuckerberg. And together, he and I try to figure out how we can do better, how we can lessen the amount of depression and division in the world.

He has brilliantly discussed these topics in his writing, including in his book, "The Coddling of the American Mind" and in his recent long article in "The Atlantic" titled, "Why the Past 10 Years of American Life Have Been Uniquely Stupid." When Teddy Roosevelt said in his famous speech that it is not the critic who counts, he had not yet read the brilliant writing of Jonathan Haidt.

I disagree with John on some of the details of his analysis and ideas, but both his criticism and our disagreement are essential if we are to build better and better technologies that connect us. Social media has both the power to destroy our society and to help it flourish. It's up to us to figure out how we take the lighter path.

This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Jonathan Haidt. - So you have been thinking about the human mind for quite a long time. You wrote "The Happiness Hypothesis," "The Righteous Mind," "The Coddling of the American Mind," and today you're thinking, you're writing a lot about social media and about democracy.

So perhaps if it's okay, let's go through the thread that connects all of that work. How do we get from the very beginning to today with the good, the bad, and the ugly of social media? So I'm a social psychologist, which means I study how we think about other people and how people affect our thinking.

And in graduate school at the University of Pennsylvania, I picked the topic of moral psychology, and I studied how morality varied across countries. I studied in Brazil and India. And in the '90s, I began, this was like I got my PhD in 1992. And in that decade was really when the American culture war kind of really began to blow up.

And I began to notice that left and right in this country were becoming like separate countries. And you could use the tools of cultural psychology to study this split, this moral battle between left and right. So I started doing that. And I began growing alarmed in the early 2000s about how bad polarization was getting.

And I began studying the causes of polarization, bringing moral psychology to bear on our political problems. And I was originally gonna write a book to basically help the Democrats stop screwing up, because I could see that some of my research showed people on the right understand people on the left, they know what they think.

You can't grow up in America without knowing what progressives think. But here I grew up generally on the left, and I had no idea what conservatives thought until I went and sought it out and started reading conservative things like National Review. So originally I wanted to actually help the Democrats to understand moral psychology so they could stop losing to George W.

Bush. And I got a contract to write "The Righteous Mind." And once I started writing it, I committed to understanding conservatives by reading the best writings, not the worst. And I discovered, you know what? You don't understand anything until you look from multiple perspectives. And I discovered there are a lot of great social science ideas in the conservative intellectual tradition.

And I also began to see, you know what? America's actually in real trouble. And this is like 2008, 2009. Things are really, we're coming apart here. So I began to really focus my research on helping left and right understand each other and helping our democratic institutions to work better.

Okay, so all this is before I had any interest in social media. I was on Twitter, I guess like 2009, and not much, didn't think about it much. And then, so I'm going along as a social psychologist studying this. And then everything seems to kind of blow up in 2014, 2015 at universities.

And that's when Greg Lukianoff came to me in May of 2014 and said, "John, weird stuff is happening. Students are freaking out about a speaker coming to campus that they don't have to go see. And they're saying it's dangerous, it's violence. Like what is going on?" And so anyway, Greg's ideas about how we were teaching students to think in distorted ways, that led us to write "The Coddling of the American Mind," which wasn't primarily about social media either.

It was about, you know, this sort of a rise of depression, anxiety. But after that, things got so much worse everywhere. And that's when I began to think like, whoa, something systemically has changed. Something has changed about the fabric of the social universe. And so ever since then, I've been focused on social media.

- So we're going to try to sneak up to the problems and the solutions at hand from different directions. I have a lot of questions whether it's fundamentally the nature of social media that's the problem, it's the decisions of various human beings that lead the social media companies that's the problem.

Is there still some component that's highlighted in "The Coddling of the American Mind" that's the individual psychology at play, or the way parenting and education work to sort of emphasize anti-fragility of the human mind as it interacts with the social media platforms and the other humans through them.

So all that beautiful mess. - That should take us an hour or two to cover. - Or maybe a couple of years, yes. But so let's start if it's okay. You said you wanted to challenge some of the things that Mark Zuckerberg has said in a conversation with me.

What are some of the ideas he expressed that you disagree with? - Okay, there are two major areas that I study. One is what is happening with teen mental health? It fell off a cliff in 2013, it was very sudden. And then the other is what is happening to our democratic and epistemic institutions?

That means knowledge generating like universities, journalism. So my main areas of research where I'm collecting the empirical research and trying to make sense of it is what's happened to teen mental health and what's the evidence that social media is a contributor? And then the other areas, what's happening to democracies, not just America, and what's the evidence that social media is a contributor to the dysfunction?

So I'm sure we'll get to that 'cause that's what the Atlantic article is about. But if we focus first on what's happened to teen mental health. So before I read the quotes from Mark, I'd like to just give the overview. And it is this. There's a lot of data tracking adolescents.

There's self-reports of how depressed, anxious, lonely. There's data on hospital admissions for self-harm. There's data on suicide. And all of these things, they bounce around somewhat, but they're relatively level in the early 2000s. And then all of a sudden, around 2010 to 2013, depending on which statistic you're looking at, all of a sudden, they begin to shoot upwards.

More so for girls in some cases, but on the whole, it's like up for both sexes. It's just that boys have lower levels of anxiety and depression. So the curve is not quite as dramatic. But what we see is not small increases. It's not like, oh, 10%, 20%. No, the increases are between 50 and 150%, depending on which group you're looking at.

Suicide for preteen girls, thankfully, it's not very common, but it's two to three times more common now. Or by 2015, it had doubled. Between 2010 and 2015, it doubled. So something is going radically wrong in the world of American preteens. So as I've been studying it, I found, first of all, it's not just America.

It's identical in Canada and the UK. Australia and New Zealand are very similar. They're just after a little delay. So whatever we're looking for here, but yet it's not as clear in the Germanic countries. In continental Europe, it's a little different, and we can get into that when we talk about childhood.

But something's happening in many countries, and it started right around 2012, 2013. It wasn't gradual. It hit girls hardest, and it hit preteen girls the hardest. So what could it be? Nobody has come up with another explanation. Nobody. It wasn't the financial crisis. That wouldn't have hit preteen girls the hardest.

There is no other explanation. The complexity here in the data is, of course, as everyone knows, correlation doesn't prove causation. The fact that television viewing was going up in the '60s and '70s doesn't mean that that was the cause of the crime. So what I've done, and this is work with Jean Twenge, who wrote the book "iGen," is because I was challenged, when Greg and I put out the book, "The Coddling of the American Mind," some researchers challenged us and said, "Oh, you don't know what you're talking about.

"The correlations between social media use "and mental health, they exist, but they're tiny. "It's like a correlation coefficient of .03, "or a beta of .05, tiny little things." And one famous article said, "It's no bigger than the correlation "of bad mental health and eating potatoes," which exists, but it's so tiny it's zero, essentially.

And that claim, that social media's no more harmful than eating potatoes or wearing eyeglasses, it was a very catchy claim, and it's caught on, and I keep hearing that. But let me unpack why that's not true, and then we'll get to what Mark said. 'Cause what Mark basically said, here, I'll actually read it.

- And by the way, just to pause real quick, is you implied, but just to make it explicit, that the best explanation we have now, as you're proposing, is that a very particular aspect of social media is the cause, which is not just social media, but the like button and the retweet, a certain mechanism of virality that was invented, or perhaps some aspect of social media is the cause.

- Okay, good idea. Let's be clear. Connecting people is good. I mean, overall, the more you connect people, the better. Giving people the telephone was an amazing step forward. Giving them free telephone, free long distance, is even better. Video is, I mean, so connecting people is good. I'm not a Luddite.

And social media, at least the idea of users posting things, like that happens on LinkedIn, and it's great. It can serve all kinds of needs. What I'm talking about here is not the internet. It's not technology. It's not smartphones. And it's not even all social media. It's a particular business model in which people are incentivized to create content.

And that content is what brings other people on. And the people on there are the product which is sold to advertisers. It's that particular business model which Facebook pioneered, which seems to be incredibly harmful for teenagers, especially for young girls, 10 to 14 years old is where they're most vulnerable.

And it seems to be particularly harmful for democratic institutions because it leads to all kinds of anger, conflict, and the destruction of any shared narrative. So that's what we're talking about. We're talking about Facebook, Twitter. I don't have any data on TikTok. I suspect it's gonna end up being, having a lot of really bad effects because the teens are on it so much.

And to be really clear, since we're doing the nuance now in this section, lots of good stuff happens. There's a lot of funny things on Twitter. I use Twitter because it's an amazing way to put out news, to put out when I write something, you and I use it to promote things.

We learn things quickly. - Well, there's gonna be, now this is harder to measure. And we'll probably, I'll try to mention it because so much of our conversation will be about rigorous criticism. I'll try to sometimes mention what are the possible positive effects of social media in different ways.

So for example, in the way I've been using Twitter, not the promotion or any of that kind of stuff, it makes me feel less lonely to connect with people, to make me smile, a little bit of humor here and there. And that at scale is a very interesting effect, being connected across the globe, especially during times of COVID and so on.

It's very difficult to measure that. So we kind of have to consider that and be honest. There is a trade-off. We have to be honest about the positive and the negative. And sometimes we're not sufficiently positive, or we're not rigorous in a scientific way about the negative.

And that's what we're trying to do here. And so that brings us to the Mark Zuckerberg email. - Okay, but wait, let me just pick up on the issue of trade-offs because people might think like, well, how much of this do we need? If we have too much, it's bad.

No, that's a one-dimensional conceptualization. This is a multi-dimensional issue. And a lot of people seem to think like, oh, what would we have done without social media during COVID? Like we would have been sitting there alone in our homes. Yeah, if all we had was texting, telephone, Zoom, Skype, multiplayer video games, WhatsApp, all sorts of ways of communicating with each other.

Oh, and there's blogs and the rest of the internet. Yeah, we would have been fine. Did we really need the hyper-viral platforms of Facebook and Twitter? Now those did help certain things get out faster. And that did help science Twitter sometimes, but it also led to huge explosions of misinformation and the polarization of our politics to such an extent that a third of the country, you know, didn't believe what the medical establishment was saying.

And we'll get into this. The medical establishment sometimes was playing political games that made them less credible. So on net, it's not clear to me, if you've got the internet, smartphones, blogs, all of that stuff, it's not clear to me that adding in this particular business model of Facebook, Twitter, TikTok, that that really adds a lot more.

- And one interesting one we'll also talk about is YouTube. I think it's easier to talk about Twitter and Facebook. YouTube is another complex beast that's very hard to, 'cause YouTube has many things. It's a content platform, but it also has a recommendation system. That's, let's focus our discussion on perhaps Twitter and Facebook, but you do in this large document that you're putting together on social media called "Social Media and Political Dysfunction Collaborative Review" with Chris Bail.

That includes, I believe, papers on YouTube as well. - It does, but yeah, again, just to finish up with the nuance, yeah, YouTube is really complicated because I can't imagine life without YouTube. It's incredibly useful. It does a lot of good things. It also obviously helps to radicalize terrorist groups and murderers.

So I think about YouTube the way I think about the internet in general, and I don't know enough to really comment on YouTube. So I have been focused, and it's also interesting. One thing we know is teen social life changed radically between about 2010 and 2012. Before 2010, they weren't mostly on every day 'cause they didn't have smartphones yet.

By 2012 to '14, that's the era in which they almost all get smartphones, and they become daily users of... what? So the girls go to Instagram and Tumblr. They go to the visual ones. The boys go to YouTube and video games. Those don't seem to be as harmful to mental health or even harmful at all.

It's really Tumblr, Instagram particularly, that seem to really have done in girls' mental health. So now, okay, so let's look at the quote from Mark Zuckerberg. So at 64 minutes and 31 seconds on the video, I time-coded this. - This is excellent. - This is the very helpful YouTube transcript.

YouTube's an amazing program. You ask him about Frances Haugen. You give him a chance to respond. And here's the key thing. So he talks about what Frances Haugen said. He said, "No, but that's mischaracterized. Actually, on most measures, the kids are doing better when they're on Instagram. It's just on one out of the 18." And then he says, "I think an accurate characterization would have been that kids using Instagram," or not kids, but teens, "is generally positive for their mental health." That's his claim, that Instagram is overall, taken as a whole, Instagram is positive for their mental health.

That's what he says. Now, is it really? Is it really? So first, just a simple, okay, now here, what I'd like to do is turn my attention to another document that we'll make available. So I was invited to give testimony before a Senate subcommittee two weeks ago, where they were considering the Platform Accountability Act.

Should we force the platforms to actually tell us what our kids are doing? Like, we have no idea, other than self-report. We have no idea. They're the only ones who know, like, the kid does this, and then over the next hours, the kid's depressed or happy. We can't know that, but Facebook knows it.

So should they be compelled to reveal the data? We need that. - So you raised just, to give people a little bit of context, and this document is brilliantly structured with questions, studies that indicate that the answer to a question is yes, indicate that the answer to a question is no, and then mixed results.

And questions include things like, does social media make people more angry or affectively polarized? - Right, wait, so that's the one that we're gonna get to. That's the one for democracy. - Yes, that's for democracy. - So I've got three different Google Docs here, 'cause I found this is an amazing way, and thank God for Google Docs.

It's an amazing way to organize the research literature, and it's a collaborative review, meaning that, so on this one, Jean Twenge and I put up the first draft, and we say, please, comment, add studies, tell us what we missed. And it evolves in real time. - In any direction, the yes or the no.

- Oh yeah, we specifically encourage, 'cause look, the center of my research is that our gut feelings drive our reasoning. That was my dissertation, that was my early research. And so Jean Twenge and I, we're gonna obviously preferentially believe that these platforms are bad for kids, 'cause we said so in our books.

So we have confirmation bias, and I'm a devotee of John Stuart Mill, the only cure for confirmation bias is other people who have a different confirmation bias. So these documents evolve because critics then say, no, you missed this, or they say, you don't know what you're talking about. It's like, great, say so, tell us.

So I put together this document, and I'm gonna put links to everything on my website. If listeners, viewers go to jonathanhaidt.com/socialmedia. It's a new page I just created. I'll put everything together in one place there, and we'll put those in the show notes. - Like links to this document, and other things like it that we're talking about.

- That's right, exactly. So yeah, so the thing I wanna call attention to now is this document here with the title, Teen Mental Health is Plummeting, and Social Media is a Major Contributing Cause. So Ben Sasse and Chris Coons are on the Judiciary Committee. They had a subcommittee hearing on Nate Persily's bill, the Platform Accountability and Transparency Act.

So they asked me to testify on what do we know, what's going on with teen mental health? And so what I did was I put together everything I know with plenty of graphs to make these points. That first, what do we know about the crisis? Well, that the crisis is specific to mood disorders, not everything else.

It's not just self-report, it's also behavioral data, because suicide and self-harm go skyrocketing after 2010. The increases are very large, and the crisis is gendered, and it's hit many countries. So I go through the data on that. So we have a pretty clear characterization, and nobody's disputed me on this part.

- So can we just pause real quick, just so for people who are not aware. So self-report, just how you kind of collect data on this kind of thing. - Sure. - You have a self-report, a survey, you ask people-- - Yeah, how anxious are you these days? - Yeah.

- How many hours a week do you use social media? - That kind of stuff. And you do, it's maybe, you can collect large amounts of data that way, 'cause you can ask a large number of people that kind of question. And but then there's, I forget the term you use, but more, so non-self-report data.

- Behavioral data. - Behavioral data, that's right. Where you actually have self-harm and suicide numbers. - Exactly. So there are a lot of graphs like this. So this is from the National Survey on Drug Use and Health. So the federal government, and also Pew and Gallup, there are a lot of organizations that have been collecting survey data for decades.

So this is a gold mine. And what you see on these graphs over and over again is relatively straight lines up until around 2010 or 2012. - And on the X-axis, we have time, years going from 2004 to 2020. And the Y-axis is the percent of US teens who had a major depression in the last year.

- That's right. So when this data started coming out around, so Jean Twenge's book, "iGen," 2017, a lot of people said, "Oh, she doesn't know what she's talking about. This is just self-report." Like, Gen Z, they're just really comfortable talking about this. This is a good thing. This isn't a real epidemic.

And literally the day before my book with Greg was published, the day before, there was a psychiatrist in The New York Times who had an op-ed saying, "Relax. Smartphones are not ruining your kid's brain." And he said, "It's just self-report. It's just that they're giving higher rates. There's more diagnosis.

"But underlying, there's no change." No, because it's theoretically possible, but all we have to do is look at the hospitalization data for self-harm and suicide, and we see the exact same trends. We see also a very sudden, big rise around between 2009 and 2012. You have an elbow, and then it goes up, up, up.

So, and that is not self-report. Those are actual kids admitted to hospitals for cutting themselves. So we have a catastrophe, and this was all true before COVID. COVID made things worse, but we have to realize, you know, COVID's going away. Kids are back in school, but we're not gonna go back to where we were because this problem is not caused by COVID.

What is it caused by? Well, just again, to just go through the point, then I'll stop. I just feel like I just really wanna get out the data to show that Mark is wrong. - This is amazing. - So first point, correlational studies consistently show a link. They almost all do, but it's not big.

Equivalent to a correlation coefficient around 0.1, typically. That's the first point. The second point is that the correlation is actually much larger than for eating potatoes. So that famous line wasn't about social media use. That was about digital media use. That included watching Netflix, doing homework, everything. And so what they did is they looked at all screen use, and then they said, "This is correlated with self-reports of depression, anxiety." Like, you know, 0.03, it's tiny.

And they said that clearly in the paper, but the media has reported it as social media is 0.03, or tiny, and that's just not true. What I found digging into it, you don't know this until you look at the, there's more than 100 studies in the Google Doc. Once you dig in, what you see is, okay, you see a tiny correlation.

What happens if we zoom in on just social media? It always gets bigger, often a lot bigger, two or three times bigger. What happens if we zoom in on girls and social media? It always gets bigger, often a lot bigger. And so what I think we can conclude, in fact, what one of the authors of the potato studies herself concludes, Amy Orban says, I think I have a quote from here, she reviewed a lot of studies, and she herself said that, quote, "The associations between social media use and wellbeing therefore range from about R equals 0.15 to R equals 0.10." So that's the range we're talking about.

And that's for boys and girls together. And a lot of research, including hers and mine, shows that for girls, it's higher. So for girls, we're talking about correlations around 0.15 to 0.2, I believe. Jean Twenge and I found it's about 0.2 or 0.22. Now this might sound like an arcane social science debate, but people have to understand, public health correlations are almost never above 0.2.

So the correlation of childhood exposure to lead and adult IQ, very serious problem, that's 0.09. Like the world's messy and our measurements are messy. And so if you find a consistent correlation of 0.15, like you would never let your kid do that thing. That actually is dangerous. And it can explain, when you multiply it over tens of millions of kids spending, you know, years of their lives, you actually can explain the mental health epidemic just from social media use.
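
To make concrete what a "small" correlation like r = 0.15 means at scale, here is a minimal illustrative sketch in Python. It is not from the conversation or from any of the studies discussed; the "depressed" cutoff and the heavy/light-user split are assumptions chosen purely for illustration.

```python
# Illustrative only: what a correlation of roughly r = 0.15 between social
# media use and a latent depression score can look like across a large
# population. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000        # simulated teens
r = 0.15             # assumed standardized correlation

use = rng.standard_normal(n)                       # standardized social media use
noise = rng.standard_normal(n)
depression = r * use + np.sqrt(1 - r**2) * noise   # latent score with corr ~ r

print("realized correlation:", np.corrcoef(use, depression)[0, 1])

# Crude "depressed" cutoff: top ~10% of the latent score (an assumption)
cutoff = np.quantile(depression, 0.90)
heavy = use > 1.0    # heavy users: more than 1 SD above the mean
light = use < -1.0   # light users: more than 1 SD below the mean

print(f"'depressed' rate, heavy users: {(depression[heavy] > cutoff).mean():.1%}")
print(f"'depressed' rate, light users: {(depression[light] > cutoff).mean():.1%}")
```

With r = 0.15, the simulated "depressed" rate among heavy users comes out roughly double the rate among light users; spread over tens of millions of teens, that gap is a large absolute number of kids, which is the sense in which a correlation that looks tiny on paper can matter at the population level.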

- Well, and then there's questions. By the way, this is really good to learn because I quit potatoes and it had no effect on me. And as a Russian, that was a big sacrifice. I'm being quite literal actually, because I'm mostly eating keto these days. But that's funny that they're actually literally called the potato studies.

Okay, but given this, and there's a lot of fascinating data here, there's also a discussion of how to fix it. What are the aspects that if fixed would start to reverse some of these trends? So if we just linger on the Mark Zuckerberg statements. So first of all, do you think Mark is aware of some of these studies?

So if you put yourself in the shoes of Mark Zuckerberg and the executives at Facebook and Twitter, how can you try to understand the studies? Like the Google Docs you put together to try to make decisions that fix things? Is there a stable science now that you can start to investigate?

And also maybe if you can comment on the depth of data that's available, because ultimately, and this is something you argue, that the data should be more transparent, should be provided. But currently, if it's not, all you have is maybe some leaks of internal data. - That's right, that's right.

And we could talk about the potential. You have to be very sort of objective about the potential bias in those kinds of leaks. You want to, it would be nice to have non-leaked data. Like-- - Yeah, it'd be nice to be able to actually have academic researchers able to access in de-individuated, de-identified form the actual data on what kids are doing and how their mood changes.

And when people commit suicide, what was happening before. And it'd be great to know that. We have no idea. - So how do we begin to fix social media, would you say? - Okay, so here's the most important thing to understand. In the social sciences, we say, is social media harmful to kids?

That's a broad question. You can't answer that directly. You have to have much more specific questions. You have to operationalize it and have a theory of how it's harming kids. And so almost all of the research is done on what's called the dose response model. That is, everybody, including most of the researchers, are thinking about this like, let's treat this like sugar.

You know, 'cause the data usually shows a little bit of social media use isn't correlated with harm, but a lot is. So, you know, think of it like sugar. And if kids have a lot of sugar, then it's bad. So how much is okay? But social media is not like sugar at all.

It's not a dose response thing. It's a complete rewiring of childhood. So we evolved as a species in which kids play in mixed age groups. They learn the skills of adulthood. They're always playing and working and learning and doing errands. That's normal childhood. That's how you develop your brain.

That's how you become a mature adult, until the 1990s. In the 1990s, we dropped all that. We said, it's too dangerous. If we let you outside, you'll be kidnapped. So we completely, we began rewiring childhood in the '90s before social media. And that's a big part of the story.

I'm a big fan of Lenore Skenazy, who wrote the book "Free-Range Kids." If there are any parents listening to this, please buy Lenore's book "Free-Range Kids," and then go to letgrow.org. It's a nonprofit that Lenore and I started with Peter Gray and Daniel Shuchman to help change the laws and the norms around letting kids out to play.

They need free play. So that's the big picture. They need free play. And we started stopping that in the '90s, we reduced it. And then Gen Z, kids born in 1996, they're the first people in history to get on social media before puberty. Millennials didn't get it until they were in college.

But Gen Z, they get it 'cause you can lie. You just lie about your age. So they really began to get on around 2009, 2010, and boom, two years later, they're depressed. It's not because they ate too much sugar necessarily. It's because even normal social interactions that kids had in the early 2000s, largely, well, they decline because now everything's through the phone.

And that's what I'm trying to get across, that it's not just a dose response thing. It's imagine one middle school where everyone has an Instagram account and it's constant drama. Everyone's constantly checking and posting and worrying and imagine going through puberty that way versus imagine there was a policy, no phones in school.

You have to check them in a locker. No one can have an Instagram account. All the parents are on board. Parents only let their kids have Instagram because the kid says everyone else has it. And we're stuck in a social dilemma. We're stuck in a trap. So what's the solution?

Keep kids off until they're done with puberty. There's a new study actually by Amy Orben and Andy Przybylski showing that the damage is greatest for girls between 11 and 13. So there is no way to make it safe for preteens or even 13, 14 year olds. We've gotta, kids should simply not be allowed on these business models where you're the product.

They should not be allowed until you're 16. We need to raise the age and enforce it. That's the biggest thing. - So I think that's a really powerful solution, but it makes me wonder if there are other solutions, like controlling the virality of bullying. Sort of, if there's a way to use social media that's more productive for childhood.

So of course one thing is putting your phone down, but first of all from the perspective of social media companies, it might be difficult to convince them to do so. And also for me as an adult who grew up without social media, social media is a source of joy.

So I wonder if it's possible to design the mechanisms, both challenging the ad-driven model, but also just technically the recommender system and how virality works on these platforms. If it's possible to design a platform that leads to growth, anti-fragility, but does not lead to depression, self-harm, and suicide.

Like finding that balance and making that the objective function, not engagement or-- - Yeah, I don't think that can be done for kids. So I am very reluctant to tell adults what to do. I have a lot of libertarian friends and I would lose their friendship if I started saying, oh, it's bad for adults and we should stop adults from using it.

But by the same token, I'm very reluctant to have Facebook and Instagram tell my kids what to do without me even knowing or without me having any ability to control it. As a parent, it's very hard to stop your kid. I have stopped my kids from getting on Instagram and that's caused some difficulties, but they also have thanked me 'cause they see that it's stupid.

They see what the kids who are really on it post, they see that the culture of it is stupid, as they say. So I don't think there's a way to make it healthy for kids. I think there's one thing which is healthy for kids, which is free play.

We already robbed them of most of it in the '90s. The more time they spend on their devices, the less they have free play. Video games is a kind of play. I'm not saying that these things are all bad, but 12 hours of video game play means you don't get any physical play.

So anyway-- - To me, physical play is the way to develop physical antifragility. - And especially social skills. Kids need huge amounts of conflict with no adult to supervise or mediate, and that's what we robbed them of. So anyway, we should move on 'cause I get really into the evidence here because I think the story's actually quite clear now.

There was a lot of ambiguity. There are conflicting studies, but when you look at it all together, the correlational studies are pretty clear and the effect sizes are coming in around 0.1 to 0.15, whether you call that a correlation coefficient or a beta. It's all standardized beta. It's all in that sort of range.

There's also experimental evidence. We collect true experiments with random assignment and they mostly show an effect. And there's eyewitness testimony. With the kids themselves, you talk to girls and you poll them. Do you think overall Instagram is good for your mental health or bad for it? You're not gonna find a group saying, "Oh, it's wonderful.

Oh yeah, yeah, Mark, you're right. It's mostly good." No, the girls themselves say, "This is the major reason." And I've got studies in the Google Doc where there've been surveys. What do you think is causing depression and anxiety? And the number one thing they say is social media. So there's multiple strands of evidence.

- Do you think the recommendation is, as a parent, that teens should not use Instagram, Twitter? - Yes. - That's ultimately, maybe in the long term, there's a more nuanced solution. - There's no way to make it safe. It's unsafe at any speed. I mean, it might be very difficult to make it safe.

And in the short term, while we don't know how to make it safe, put down the phone. - Well, hold on a second. Play with other kids via a platform like Roblox or multiplayer video games. That's great. I have no beef with that. You focused on bullying before. That's one of five or seven different avenues of harm.

The main one I think, which does in the girls, is not being bullied. It's living a life where you're thinking all the time about posting. Because once a girl starts posting, so it's bad enough that they're scrolling through, and this is, everyone comments on this, you're scrolling through and everyone's life looks better than yours because it's fake and all that you see are the ones the algorithm picked that were the nicest.

Anyway, so the scrolling, I think, is bad for the girls. But I'm beginning to see, I can't prove this, but I'm beginning to see from talking to girls, from seeing how it's used, is once you start posting, that takes over your mind. And now you're, basically, you're no longer present.

Because even if you're only spending five or six hours a day on Instagram, you're always thinking about it. And when you're in class, you're thinking about how are people responding to the post that I made between periods, between classes. I mean, I do it. I try to stay off Twitter for a while, but now I've got this big article, I'm tweeting about it, and I can't help it.

I check 20 times a day, I'll check. Like, what are people saying? What are people saying? This is terrible. And I'm a 58-year-old man. Imagine being a 12-year-old girl going through puberty, you're self-conscious about how you look. And I see some young women, I see some professional young women, women in their 20s and 30s, who are putting up sexy photos of themselves.

Like, and this is so sad, so sad, don't be doing this. - Yeah, see, the thing where I disagree a little bit is, I agree with you in the short term, but in the long term, I feel it's the responsibility of social media, not in some kind of ethical way, not just in an ethical way, but it'll actually be good for the product and for the company to maximize the long-term happiness and well-being of the person.

So not just engagement. So consider-- - But the person is not the customer. So the thing is not to make them happy, it's to keep them on. - That's the way it is currently, that driven. - If we can get a business model, as you're saying, I'd be all for it.

- And I think that's the way to make much more money. - So like a subscription model, where the money comes from paying? - It's not-- - That would work, wouldn't it? That would help. - So subscription definitely would help, but I'm not sure it's so much, I mean, a lot of people say it's about the source of money, but I just think it's about the fundamental mission of the product.

If you want people to really love a thing, I think that thing should maximize your long-term well-being. - It should, in theory, in morality land, it should. - I don't think it's just morality land. I think in business land, too. But that's maybe a discussion for another day. We're studying the reality of the way things currently are, and they are as they are, as the studies are highlighting.

So let us go then in from the land of mental health for young people to the land of democracy. By the way, in these big umbrella areas, is there a connection, is there a correlation between the mental health of a human mind and the division of our political discourse?

- Oh yes, oh yes. So our brains are structured to be really good at approach and avoid. So we have circuits, the front left circuit, this is an oversimplification, but there's some truth to it. There's what's called the behavioral activation system, front left cortex. It's all about approach, opportunity, kid in a candy store.

And then the front right cortex has circuits specialized for withdrawal, fear, threat. And of course, students, I'm a college professor, and most of us think about our college days like, you know, yeah, we were anxious at times, but it was fun. And it was like, I can take all these courses, I can do all these clubs, I know all these people.

Now imagine if in 2013, all of a sudden, students are coming in with their front right cortex hyperactivated, everything's a threat, everything is dangerous, there's not enough to go around. So the front right cortex puts us into what's called defend mode as opposed to discover mode. Now let's move up to adults.

Imagine a large, diverse, secular, liberal democracy in which people are most of the time in discover mode. And you know, we have a problem, hmm, let's think how to solve it. And this is what de Tocqueville said about Americans, like, there's a problem, we get together, we figure out how to solve it.

And he said, whereas in England and France, people would wait for the king to do it. But here, like, you know, let's roll up our sleeves, let's do it. That's the can-do mindset. That's front left cortex discover mode. If you have a national shift of people spending more time in defend mode, now you, so everything that comes up, whatever anyone says, you're not looking like, oh, is there something good about it?

You're thinking, you know, how is this dangerous? How is this a threat? How is this violence? How can I attack this? How can I, you know, so if you imagine, you know, God up there with a little lever, like, okay, let's push everyone over into, you know, more into discover mode.

And it's like joy breaks out, age of Aquarius. All right, let's shift them back into, let's put everyone in defend mode. And I can't think of a better way to put people in defend mode than to have them spend some time on partisan or political Twitter, where it's just a stream of horror stories, including videos about how horrible the other side is.

And it's not just that they're bad people. It's that if they win this election, then we lose our country or then it's catastrophe. So Twitter, and again, we're not saying all of Twitter, you know, most people aren't on Twitter, and the people that are on it are mostly not talking about politics, but the ones that are on talking about politics are flooding us with stuff.

All the journalists see it. All the mainstream media is hugely influenced by Twitter. So if we put everyone, if there's more sort of anxiety, sense of threat, this colors everything. And then you're not, you know, the great thing about a democracy and especially, you know, a legislature that has some diversity in it, is that the art of politics is that you can grow the pie and then divide it.

You don't just fight zero sum. You find ways that we can all get 60% of what we want. And that ends when everyone's anxious and angry. - So let's try to start to figure out who's to blame here. Is it the nature of social media? Is it the decision of the people at the heads of social media companies that they're making in the detailed engineering designs of the algorithm?

Is it the users of social media that drive narratives like you mentioned journalists, that want to maximize drama in order to drive clicks to their off-site articles? Is it just human nature that loves drama, can't look away from an accident when you're driving by? Is there something to be said about, the reason I ask these questions is to see, can we start to figure out what the solution would be to alleviate, to deescalate the- - Not yet, not yet.

Let's first, we have to understand, as we did on the teen mental health thing, okay, now let's lay out what is the problem? What's messing up our country? And then we can talk about solutions. So it's all the things you said, interacting in an interesting way. So human nature is tribal.

We evolved for intergroup conflict. We love war. The first time my buddies and I played paintball, I was 29. And we were divided into teams with strangers to shoot guns at each other and kill each other. And we all, afterwards, it was like, oh my God, that was incredible.

Like it really felt like we'd opened a room in our hearts that had never been opened. But as men, testosterone changes our brains and our bodies and activates the war stuff, like we've got more stuff. And that's why boys like certain team sports, it's play war. So that's who we are.

It doesn't mean we're always tribal. It doesn't mean we're always wanting to fight. We're also really good at making peace and cooperation and finding deals. We're good at trade and exchange. So you want your country to, you want a society that has room for conflict, ideally over sports, like that's great.

That's totally, it's not just harmless, it's actually good. But otherwise you want cooperation to generally prevail in the society. That's how you create prosperity and peace. And if you're gonna have a diverse democracy, you really better focus on cooperation, not on tribalism and division. And there's a wonderful book by Yascha Mounk called "The Great Experiment" that talks about the difficulty of diversity and democracy and what we need to do to get this right and to get the benefits of diversity.

So that's human nature. Now let's imagine that the technological environment makes it really easy for us to cooperate. Let's give everyone telephones and the postal service. Let's give them email, like, wow, we can do all these things together with people far away. It's amazing. Now, instead of that, let's give them a technology that encourages them to fight.

So early Facebook and Twitter were generally lovely places. People old enough to remember, like, they were fun. There was a lot of humor. You didn't feel like you were gonna get your head blown off no matter what you said. 2007, 2008, 2009, it was still fun. These were nice places mostly.

And like almost all the platforms start off as nice places. But, and this is the key thing in the article, in the Atlantic article on Babel, on after Babel. - The Atlantic article, by the way, is "Why the Past 10 Years of American Life Have Been Uniquely Stupid." - Yeah, my title in the magazine was "After Babel: Adapting to a World We Can No Longer Share," is what I proposed.

But they A/B tested what's the title that gets the most clicks, and it was "Why the Past 10 Years Have Been Uniquely Stupid." - So Babel, the Tower of Babel, is a driving metaphor in the piece. So first of all, what is it, what's the Tower of Babel? What's Babel, what are we talking about?

- Okay, so the Tower of Babel is a story early in Genesis where the descendants of Noah are spreading out and repopulating the world. And they're on the plain of Shinar, and they say, "Let us build us a city with a tower to make a name for ourselves, lest we be scattered again." And so it's a very short story, there's not a lot in it, but it looks like they're saying, we don't want God to flood us again, let's build a city and a tower to reach the heavens.

And God is offended by the hubris of these people acting again like gods. And he says, and here's the key line, he says, "Let us go down and confuse their language so that they may not understand one another." So in the story, he doesn't literally knock the tower over, but many of us have seen images or movie dramatizations where a great wind comes and the tower is knocked over and the people are left wandering amid the rubble, unable to talk to each other.

So I've been grappling, I've been trying to say, what the hell happened to our society? Beginning in 2014, what the hell is happening to universities? And then it spread out from universities, it hit journalism, the arts, and now it's all over companies. What the hell happened to us? And it wasn't until I reread the Babel story a couple of years ago that I thought, whoa, this is it, this is the metaphor.

Because I'd been thinking about tribalism and left-right battles and war, and that's easy to think about. But Babel isn't like, and God said, "Let half of the people hate the other half." No, it wasn't that. It's God said, "Let us confuse their language that they, none of them, can understand each other ever again, or at least for a while." So it's a story about fragmentation.

And that's what's unique about our time. So Meta or Facebook wrote a rebuttal to my article. They disputed what I said, and one of their arguments was, oh, but polarization goes back way before social media and it was happening in the '90s. And they're right, it does. And I did say that, but I should have said it more clearly with more examples.

But here's the new thing. Even though left and right were beginning to hate each other more, we weren't afraid of the person next to us. We weren't afraid of each other. Cable TV, Fox News, whatever you want to point to about increasing polarization, it didn't make me afraid of my students.

And that was new in around 2014, 2015. We started hearing, getting articles, I'm a liberal professor and my liberal students frightened me. It was in Vox in 2015. And that was after Greg and I had turned in the first draft of our coddling article. And surveys show over and over again, students are not as afraid of their professors, they're actually afraid of other students.

Most students are lovely. It's not like the whole generation has lost their minds. What happens is a small number, a small number, are adept at using social media to destroy anyone that they think they can get credit for destroying. And the bizarre thing is it's never, it's rarely about what ideas you express.

It's usually about a word, like he used this word, or this was insensitive, or I can link this word to that word. So they don't even engage with ideas and arguments. It's a real sort of gotcha, prosecutorial, it's like a witch trial mindset. - So the unique thing here, there's something about social media in those years that a small number of people can sort of be catalysts for this division.

They can start the viral wave that leads to a division that's different than the kind of division we saw before. - It's a little different than a viral wave. Once you get some people who can use social media to intimidate, you get a sudden phase shift. You get a big change in the dynamics of groups.

And that's the heart of the article. This isn't just another article about how social media is polarizing us and destroying democracy. The heart of the article is an analysis of what makes groups smart and what makes them stupid. And so because, as we said earlier, my own research is on post-hoc reasoning, post-hoc justification, rationalization.

The only cure for that is other people who don't share your biases. And so if you have an academic debate, as like the one I'm having with these other researchers over social media, I write something, they write something, I have to take account of their arguments and they have to take account of mine.

When the academic world works, it's because it puts us together in ways that things cancel out. That's what makes universities smart. It's what makes them generators of knowledge. Unless we stop dissent, what if we say, on these topics, there can be no dissent. And if anyone says otherwise, if any academic comes up with research that says otherwise, we're gonna destroy them.

And if any academic even tweets a study contradicting what is the official word, we're gonna destroy them. And that was the famous case of David Shore, who in the days after George Floyd was killed and there were protests, and the question is, are these protests gonna be productive? Are they gonna backfire?

Now, most of them were peaceful, but some were violent. And he tweeted a study, he just simply tweeted a study done by an African-American, I think sociologist at Princeton, Omar Wasow. And Wasow's study showed that when you look back at the '60s, you see that where there were violent protests, it tended to backfire, peaceful protests tend to work.

And so he simply tweeted that study. And there was a Twitter mob after him, this was insensitive, this was anti-black, I think he was accused of, and he was fired within a day or two. So this is the kind of dynamic. This is not caused by cable TV. This is not caused, this is something new.

- Can I, just on a small tangent there, in that situation, because it happens time and time again, you highlight it in your current work, but also in "The Coddling of the American Mind": is the blame on the mob, the mechanisms that enable the mob, or the people that do the firing?

The administration does the firing. - Yeah, it's all of them. - Well, can I, I sometimes feel that we don't put enough blame on the people that do the firing. Which is, that feels like, in the long arc of human history, that is the place for courage and for ideals, right?

That's where it stops. That's where the buck stops. So there's going to be new mechanisms for mobs and all that kind of stuff. There's going to be tribalism. But at the end of the day, that's what it means to be a leader, is to stop the mob at the door.

- But I'm a social psychologist, which means I look at the social forces that work on people. And if you show me a situation in which 95% of the people behave one way, and it's a way that we find surprising and shameful, I'm not going to say, wow, 95% of the people are shameful.

I'm going to say, wow, what a powerful situation. We've got to change that situation. So that's what I think is happening here, because there are hardly any in the first few years, you know, it began around 2018, 2019, it really enters the corporate world. There are hardly any leaders who stood up against it.

But I've talked to a lot, and it's always the same thing. You have these, you know, people in their, usually in their 50s or 60s, generally they're progressive or on the left. They're accused of things by their young employees. They don't have the vocabulary to stand up to it.

And they give in very quickly. And because it happens over and over again, and there's only a few examples of university presidents who said like, no, we're not going to stop this talk just because you're freaking out. No, you know, we're not going to fire this professor because he wrote a paper that you don't like.

There are so few examples, I have to conclude that the situational forces are so strong. Now, I think we are seeing, we are seeing a reversal in the last few weeks or months. A clear sign of that is that the New York Times actually came out with an editorial from the editorial board saying that free speech is important.

Now that's amazing that the Times had the guts to stand up for free speech because, you know, they're the people, well, what's been happening with the Times is that they've allowed Twitter to become the editorial board. Twitter has control over the New York Times and the New York Times literally will change articles.

I have an essay in Politico with Nadine Strossen, Steve Pinker and Pamela Paresky on how the New York Times retracted and changed an editorial by Bret Stephens. And they did it in a sneaky way and they lied about it. And they did this out of fear because he mentioned IQ.

He mentioned IQ and Jews. And then he went on to say, it probably isn't a genetic thing, it's probably cultural, but he mentioned it. And the New York Times, I mean, they were really cowardly. Now, I think they, from what I hear, they know that they were cowardly. They know that they should not have fired James Bennet.

They know that they gave in to the mob. And that's why they're now poking their head up above the parapet and they're saying, oh, we think that free speech is important. And then of course they got their heads blown off 'cause Twitter reacted like, how dare you say this?

Are you saying racist speech is okay? But they didn't back down. They didn't retract it. They didn't apologize for defending free speech. So I think the Times might be coming back. - Can I ask you an opinion on something here? What, in terms of the Times coming back, in terms of Twitter being the editorial board for the prestigious journalistic organizations, what's the importance of the role of Mr.

Elon Musk in this? So, you know, it's all fun and games, but here's a human who tweets about the importance of freedom of speech and buys Twitter. What are your thoughts on the influence, the positive and the negative possible consequences of this particular action? - So, you know, if he is gonna succeed and if he's gonna be one of the major reasons why we decarbonize quickly and why we get to Mars, then I'm willing to cut him a lot of slack.

So I have an overall positive view of him. Now where I'm concerned and where I'm critical is we're in the middle of a raging culture war and this culture war is making our institutions stupid. It's making them fail. This culture war, I think, could destroy our country. And by destroy, I mean we could descend into constant constitutional crises, a lot more violence.

You know, not that we're gonna disappear, not that we're gonna kill each other, but I think there will be a lot more violence. So we're in the middle of this raging culture war. It's possibly turning to violence. You need to not add fuel to the fire. And the fact that he declared that he's gonna be a Republican and the Democrats are the bad party.

And, you know, as an individual citizen, he's entitled to his opinion, of course, but as an influential citizen, he should at least be thoughtful. And more importantly, companies need and I think would benefit from a Geneva Convention for the culture war in which, 'cause they're all being damaged by the culture war coming to the companies.

What we need to get to, I hope, is a place where companies do, they have strong ethical obligations about the effects that they cause, about how they treat their employees, about their supply chain. They have strong ethical obligations, but they should not be weighing in on culture war issues.

- Well, if I can read the exact tweet, 'cause part of the tweet I like, he said, "In the past, I voted Democrat because they were mostly the kindness party, but they have become the party of division and hate, so I can no longer support them and will vote Republican." And then he finishes with, "Now watch their dirty tricks campaign against me unfold." Okay.

- What do you make of that? Like, what do you think he was thinking that he came out so blatantly as a partisan? - Because he's probably communicating with the board, with the people inside Twitter, and he's clearly seeing the lean. And he's responding to that lean. He's also opening the door to the potential, bringing back the former president onto the platform, and also bringing back, which he's probably looking at the numbers of the people who are behind Truth Social, saying that, okay, it seems that there's a strong lean in Twitter in terms of the left.

And in fact, from what I see, it seems like the current operation of Twitter is the extremes of the left get outraged by something, and the extremes of the right point out how the left is ridiculous. Like, that seems to be the mechanism. And that's the source of the drama, and then the left gets very mad at the right that points out the ridiculousness, and there's this vicious kind of cycle.

- That's the polarization cycle. That's what we're in. - There's something that happened here that there's a shift where, there's a decline, I would say, in both parties towards being shitty. - Okay, but look, with everything with the parties, that's not the issue. The issue is, should the most important CEO in America, the CEO of some of our biggest and most important companies, so let's imagine five years from now, two different worlds.

In one world, the CEO of every Fortune 500 company has said, I'm a Republican because I hate those douchebags, or I'm a Democrat because I hate those Nazi racists. That's one world where everybody, everybody puts up a thing in their window, everybody, it's culture war everywhere, all the time, 24 hours a day.

You pick a doctor based on whether he's red or blue. Everything is culture war. That's one possible future, which we're headed towards. The other is, we say, you know what? Political conflict should be confined to political spaces. There is room for protest, but you don't go protesting at people's private homes.

You don't go threatening their children. You don't go doxing them. We have to have channels that are not culture war all the time. When you go shopping, when you go to a restaurant, you shouldn't be yelled at and screamed at. When you buy a product, you should be able to buy products from an excellent company.

You shouldn't have to always think, what's the CEO? What is the, I mean, what an insane world, but that's where we're heading. So I think that Elon did a really bad thing in launching that tweet. That was, I think, really throwing fuel on a fire and setting a norm in which businesses are gonna get even more politicized than they are.

- And you're saying specifically the problem was that he picked the side. - As the head of, yes, as the CEO, as the head of several major companies, of course we can find out what his views are. You know, it's not like it's, I mean, actually with him, it's maybe hard to know, but it's not that a CEO can't be a partisan or have views, but to publicly declare it in that way, in such a really insulting way, this is throwing fuel on the fire and it's setting a precedent that corporations are major players in the culture war.

And I'm trying to reverse that. We've got to pull back from that. - Let me play devil's advocate here. So, 'cause I've gotten a chance to interact with quite a few CEOs, there is also a value for authenticity. So I'm guessing this was written while sitting on the toilet and I could see in a day from now saying, "LOL, just kidding." There's a humor, there's a lightness, there's a chaos element, and that's, chaos is not-- - Yeah, that's not what we need right now.

We don't need more chaos. - Well, so yes, there's a balance here. The chaos isn't engineered chaos, it's really authentically who he is. And I would like to say that there's-- - I agree with that. - That's a trade-off, because if you become a politician, so there's a trade-off between, in this case, maybe authenticity and civility, maybe, like being calculating about the impact you have with your words versus just being yourself.

And I'm not sure calculating is also a slippery slope. Both are slippery slopes, you have to be careful. - So when we have conversations in a vacuum and we just say like, "What should a person do?" Those are very hard, but our world is actually structured into domains and institutions.

And if it's just like, "Oh, talking here among our friends, "like we should be authentic," sure. But the CEO of a company has fiduciary duties, legal fiduciary duties to the company, he owes loyalty to the company. And if he is using the company for his own political gain or other purposes or social standing, that's a violation of his fiduciary duty to the company.

Now there's debate among scholars whether your fiduciary duty is to the shareholders, I don't think it's the shareholders. I think many legal experts say the company's a legal person, you have duties to the company, employees owe a duty to the company. So he's got those duties and I think he, you can say he's being authentic, but he's also violating those duties.

So it's not necessarily he's violating a law by doing it, but he certainly is shredding any notion of professional ethics around leadership of a company in the modern age. - I think you have to take it in the full context because you see that he's not being a political player, he's just saying quit being douchey.

- Suppose the CEO of Ford says, "You know what, let's pick a group." Well, I shouldn't do a racial group 'cause that would be different. Let's just say, "You know what, "left-handed people are douchebags, I hate them." Like, why would you say that? Like, why would you drag left-handed people into it?

- What you said now is not either funny or light-hearted. Because of the "I hate them," it wasn't funny. I'm not picking on you, I'm saying that statement. Words matter, there's a lightness to the statement in the full context. If you look at the timeline of the man, there's ridiculous memes and there's nonstop jokes. My big problem with the CEO of Ford is there's never any of that.

Not only is there none of that, there's not a celebration of the beauty of the engineering of the different products. It's all political speak, channeled through multiple meetings of PR. There's levels upon levels upon levels where you think that it's really not authentic. And there, by being polite, by being civil, you're actually playing politics because all of your actual political decision-making is done in the back channels.

That's obvious. Here, here's a human being authentic and actually struggling with some of the ideas and having fun with it. I think this lightness represents the kind of positivity that we should be striving for. It's funny to say that because you're looking at these statements and they seem negative, but in the full context of social media, I don't know if they are.

- But look at what you just said, in the full context. You're taking his tweets in context. You know who doesn't do that? Twitter. Like, that's the Twitter-- - Well, that's the problem you're highlighting. - The rule of Twitter is there is no context. Everything is taken in the maximum possible way.

There is no context. - Oh, you're saying-- - So this is not like, you know, yes, I wish we did take people in context. I wish we lived in that world. But now that we have Twitter and Facebook, we don't live in that world anymore. - So you're saying it is a bit of responsibility for people with a large platform to consider the fact that there is the fundamental mechanism of Twitter where people don't give you the benefit of the doubt.

- Well, I don't wanna hang it on large platform because then that's what a lot of people say. Like, well, you know, she shouldn't say that because she has a large platform and she should say things that agree with my politics. I don't wanna hang it on large platform.

I wanna hang it on CEO of a company. CEOs of a company have duties and responsibilities. And, you know, Scott Galloway, I think is very clear about this. You know, he criticized Elon a lot as being a really bad role model for young men. Young men need role models and he is a very appealing, attractive role model.

- So I agree with you, but in terms of being a role model, I think, I don't wanna put a responsibility on people, but yes, he could be a much, much better role model. There's-- - You can't insult sitting senators by calling them old. I mean, that's, you know.

- Yeah, I won't do a both-side-ism of like, well, those senators can be assholes too. - Yeah, yes, yes. - But that's fair enough. Respond intelligently, as I tweeted, to unintelligent treatment, yes, yes. So the reason I like, and he's now a friend, the reason I like Elon is because of the engineering, because of the work he does.

- No, I admire him enormously for that. - But what I admire on the Twitter side is the authenticity because I've been a little bit jaded and worn out by people who have built up walls, people in the position of power, the CEOs and the politicians who built up walls and you don't see the real person.

That's one of the reasons I love long-form podcasting, is especially if you talk more than 10 minutes, it's hard to have a wall up. It all kind of crumbles away. So I don't know, but yes, yes, you're right. That is a step backwards to say, at least to me, the biggest problem is to pick sides, to say I'm not going to vote this way or that way.

That's, like leave that to the politicians. You have much, like the importance of social media is far bigger than the bickering, the short-term bickering of any one political party. It's a platform where we make progress, where we develop ideas through sort of rigorous discourse, all those kinds of things.

- So, okay, so here's an idea about social media developed through social media from Elon, which is everyone freaks out because they think, oh, he's going to do less content moderation. The left is freaking out because they want more content moderation. The right is celebrating because they think the people doing the content moderation are on the left.

But there was one, I think it was a tweet, where he said like three things he was going to do to make it better. And it was, I would defeat the bots or something. But he said, authenticate all humans. And this is a hugely important statement. And it's pretty powerful that this guy can put three words in a tweet.

And actually, I think this could change the world. Even if the bid fails, the fact that Elon said that, that he thinks we need to authenticate all humans is huge. Because now we're talking about solutions here. What can we do to make social media a better place for democracy, a place that actually makes democracy better?

As Tristan Harris has pointed out, social media and digital technology, the Chinese are using it really skillfully to make a better authoritarian nation. And by better, I don't mean morally better. I mean like more stable, successful. Whereas we're using it to make ourselves weaker, more fragmented, and more insane.

So we're on the way down. We're in big trouble. And all the argument is about content moderation. And what we learned from Frances Haugen is that what, five or 10% of what they might call hate speech gets caught, 1% of violence and intimidation. Content moderation, even if we do a lot more of it, isn't gonna make a big difference.

All the power is in the dynamics, in changes to the architecture. And as I said in my Atlantic article, what are the reforms that would matter for social media? And the number one thing I said, the number one thing I believe is user authentication or user verification. And people freak out and they say like, oh, but we need anonymity.

Like, yeah, fine, you can be anonymous. But what I think needs to be done is anyone can open an account on Twitter, Facebook, whatever, as long as you're over 16, and that's another piece. Once you're 16 or 18, at a certain age, you can be treated like an adult.

You can open an account and you can look, you can read, and you can make up whatever fake name you want. But if you want to post, if you want the viral amplification on a company that has section 230 protection from lawsuits, which is a very special privilege, I understand the need for it, but it's an incredibly powerful privilege to protect them from lawsuits.

If you want to be able to post on platforms that, as we'll get to in the Google Doc, there's a lot of evidence that they are undermining and damaging democracy. Then the company has this minimal responsibility it has to meet. Banks have know your customer laws. You can't just walk up to a bank with a bag of money that you stole and say, here, deposit this for me.

My name's John Smith. You have to actually show who you are. And the bank isn't gonna announce who you are publicly, but you have to, if they're gonna do business with you, they need to know you're a real person, not a criminal. And so there's a lot of schemes for how to do this.

There's multiple levels. People don't seem to understand this. Level zero of authentication is nothing. That's what we have now. Level one, this might be what Elon meant, authenticate all humans, meaning you have to at least pass a CAPTCHA or some test to show you're not a bot. There's no identity, there's nothing.

Just something that, you know, it's a constant cat and mouse struggle between bots. So we try to just filter out pure bots. The next level up, there are a variety of schemes that allow you to authenticate identity in ways that are not traceable or kept. So some, whether you show an ID, whether you use biometric, whether you have something on the blockchain that establishes identity, whether it's linked to a phone, whatever it is, there are multiple schemes now that companies have figured out for how to do this.

And so if you did that, then in order to get an account where you have posting privileges on Facebook or Twitter or TikTok or whatever, you have to at least do that. And if you do that, you know, now the other people are real humans too. And suddenly our public square's a lot nicer 'cause you don't have bots swarming around.
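
To make the tiers just described concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the names VerificationLevel, Account, can_read, and can_post are illustrative, not any real platform's API, and the point is only that reading stays open to anyone under any pseudonym while posting privileges hang off the verification tier.

from enum import IntEnum
from dataclasses import dataclass

class VerificationLevel(IntEnum):
    NONE = 0      # level zero: no check at all (the status quo)
    HUMAN = 1     # passed a CAPTCHA-style "not a bot" test
    IDENTITY = 2  # identity established privately (ID, biometric, phone, etc.)

@dataclass
class Account:
    handle: str               # can still be a pseudonym
    level: VerificationLevel

def can_read(account: Account) -> bool:
    # Anyone can open an account and read or lurk under any fake name.
    return True

def can_post(account: Account) -> bool:
    # Posting and viral amplification require identity verification,
    # even though the public handle stays anonymous.
    return account.level >= VerificationLevel.IDENTITY

print(can_post(Account("anon_reader", VerificationLevel.HUMAN)))     # False
print(can_post(Account("anon_poster", VerificationLevel.IDENTITY)))  # True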

This would also cut down on trolls. You still have trolls who use their real name, but this would just make it a little scarier for trolls. Some men turn into complete assholes. They can be very polite in real life, but some men, as soon as they have the anonymity, they start using racial slurs.

They're horrible. One troll can ruin thousands of people's day. - You know, I'm somebody who believes in free speech. And so there's been a lot of discussions about this. And we'll ask you some questions about this too. But there is, the tension there is the power of a troll to ruin the party.

- Yeah, that's right. - So like this idea of free speech, boy, do you have to also consider if you wanna have a private party and enjoy your time, challenging, lots of disagreement, debate, all that kind of stuff, but fun. No like annoying person screaming, just not disagreeing, but just like spilling like the drinks all over the place.

Yeah, all that kind of stuff. So see, you're saying it's a step in the right direction to at least verify the humanness of a person while maintaining anonymity. So that's one step, but the further step, that's maybe doesn't go all the way because you can still figure out ways to create multiple accounts and you can-- - But it's a lot harder.

So actually there's a lot of ways to do this. There's a lot of creativity out there about solving this problem. So if you go to the social media and political dysfunction Google Doc that I created with Chris Bail, and then you go to section 11, proposals for improving social media.

So we're collecting there now some of the ideas for how to do user authentication. And so one is WorldCoin, there's one human-id.org. This is a new organization created by an NYU Stern student who just came into my office last week, working with some other people. And what they do here is they have a method of identity verification that is keyed to your phone.

So you do have to have a phone number. And of course you can buy seven different phone numbers if you want, but it's gonna be about $20 or $30 a month. So nobody's gonna buy a thousand phones. So yeah, not everybody has just one unique ID, but most people do and nobody has a thousand.
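
As a rough illustration of why tying accounts to a phone line raises the cost of fake accounts, here is a toy calculation; the dollar figure is just the ballpark number mentioned above, not real pricing, and the function name is made up.

MONTHLY_LINE_COST_USD = 25  # rough midpoint of the $20-30 figure mentioned above

def sock_puppet_cost(n_accounts: int, months: int = 12) -> int:
    # One verified account per phone line, so n fake accounts need n paid lines.
    return n_accounts * MONTHLY_LINE_COST_USD * months

print(sock_puppet_cost(1))     # one real account: about $300 a year
print(sock_puppet_cost(1000))  # a thousand sock puppets: about $300,000 a year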

So just things like this that would make an enormous difference. So here's the way that I think about it. Imagine a public square in which the incentives are to be an asshole, that the more you kick people in the shins and spit on them and throw things at them, the more people applaud you.

Okay, so that's the public square we have now. Not for most people, but as you said, just one troll can ruin it for everybody. If there's a thousand of us in the public square and 10 are incentivized to kick us and throw shit at us, it's no fun to be in that public square.

So right now, I think Twitter in particular is making our public square much worse. It's making our democracy much weaker, much more divided, it's bringing us down. Imagine if we changed the incentives. Imagine if the incentive was to be constructive. And so this is an idea that I've been kicking around.

I talked about it with Reid Hoffman last week and he seemed to think it's a good idea. And it would be very easy to, rather than trying to focus on posts, what post is fake or whatever, focus on users, what users are incredibly aggressive. And so if people just use a lot of obscenity and exclamation points,

AI could easily code nastiness or just aggression, hostility. And imagine if every user is rated on a one to five scale for that. And the default, when you open an account on Twitter or Facebook, the default is four. You will see everybody who's a four and below, but you won't even see the people who are fives and they don't get to see you.

So they can say what they want, free speech. We're not censoring them, they can say what they want. But now there's actually an incentive to not be an asshole. 'Cause the more of an asshole you are, the more people block you out. So imagine our country goes in two directions.

In one, things continue to deteriorate and we have no way to have a public square in which we could actually talk about things. And in the other, we actually try to disincentivize being an asshole and encourage being constructive. What do you think? - Well, this is because I'm an AI person.

And I very much, ever since I talked to Jack about the health of conversation, I've been looking at a lot of the machine learning models involved, and I believe that nastiness classification is a difficult problem to do automatically. - I'm sure it is. - So I personally believe in crowdsourced nastiness labeling.

But in an objective way, where it doesn't become viral mob-cancellation-type dynamics. More sort of objectively, almost out of context, with only local context: is this a shitty thing to say at this moment? Because here's the thing. - Well, wait, no, but we don't care about individual posts.

- No, no, but-- - All that matters is the average. - The posts make the man. - They do, but the point is, as long as we're talking about averages here, one person has a misclassified post, it doesn't matter. - Right, right, right. Yeah, yeah, but you need to classify posts in order to build up the average.

That's what I mean. And so I really like that idea, the high level idea of incentivizing people to be less shitty. 'Cause that's what, we have that incentive in real life. - Yeah, that's right. - It's actually really painful to be a full-time asshole, I think, in physical reality.

- That's right, you'd be cut off. - It should also be painful to be an asshole on the internet. There could be different mechanisms for that. I wish AI was there, machine learning models were there. They just aren't yet. But how about we have, so one track is we have AI machine learning models and they render a verdict.

Another track is crowdsourcing. And then whenever the two disagree, you have staff at Twitter or whatever, they look at it and they say, "What's going on here?" And that way you can refine both the AI and you can refine whatever the algorithms are for the crowdsourcing. Because of course that can be gamed and people can coordinate, "Hey, let's all rate this guy as really aggressive." So you wouldn't want just to rely on one single track.
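
A rough sketch of the scheme discussed in this exchange, with everything hypothetical: a toy stand-in for a hostility model, crowdsourced ratings of the same post, disagreements routed to human review, a per-user average built from post scores, and the default visibility threshold of four described earlier.

from statistics import mean

HOSTILE_WORDS = {"idiot", "moron", "scum"}  # toy lexicon, stands in for a real ML model

def model_score(post: str) -> float:
    # Stand-in classifier: crude keyword count mapped onto a 1-5 hostility scale.
    hits = sum(word in post.lower() for word in HOSTILE_WORDS)
    return min(5.0, 1.0 + 2.0 * hits)

def crowd_score(ratings: list[float]) -> float:
    # Stand-in for aggregated crowdsourced ratings of the same post.
    return mean(ratings)

def score_post(post: str, ratings: list[float], gap: float = 1.5):
    m, c = model_score(post), crowd_score(ratings)
    if abs(m - c) > gap:
        return None  # the two tracks disagree: route this one to human staff
    return (m + c) / 2

def user_hostility(posts: list[str], all_ratings: list[list[float]]) -> float:
    # Individual misclassifications wash out in the per-user average.
    scores = [s for p, r in zip(posts, all_ratings) if (s := score_post(p, r)) is not None]
    return mean(scores)

def visible_to(viewer_threshold: float, author_hostility: float) -> bool:
    # Nobody is censored; authors above a viewer's threshold just aren't shown
    # to that viewer. Applying the check in both directions gives the mutual
    # blocking described above.
    return author_hostility <= viewer_threshold

posts = ["you absolute idiot", "interesting point, thanks"]
ratings = [[4.0, 5.0], [1.0, 1.0]]
print(user_hostility(posts, ratings))                    # about 2.4 on the 1-5 scale
print(visible_to(4.0, user_hostility(posts, ratings)))   # True: below the default cutoff of 4

In practice the disagreement gap, the default threshold, and how crowd ratings are aggregated would all need tuning and anti-gaming work, which is exactly what the human-review track is for.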

But if you have two tracks, I think you could do it. - What do you think about this word misinformation that maybe connects to our two discussions now? So one is the discussion of social media and democracy. And then the other is the coddling of the American mind. I've seen the word misinformation misused or used as a bullying word, like racism and so on, which are important concepts to identify, but they're nevertheless overused.

Does that worry you? Because that seems to be the mechanism from inside Twitter, from inside Facebook, to label information you don't like versus information that's actually fundamentally harmful to society. - So I think there is a meaning of disinformation that is very useful and helpful, which is when you have a concerted campaign by Russian agents to plant a story and spread it.

And they've been doing that since the '50s or '40s even. - That's what this podcast actually is. - Is what? (laughs) - It's a disinformation campaign by the Russians. - Yeah, you seem really Soviet to me, buddy. - It's subtle, it's between the lines. Okay, I'm sorry. - So I think to the extent that there are campaigns by either foreign agents or just by the Republican or Democratic parties, there have been examples of that, there are all kinds of concerted campaigns that are intending to confuse or spread lies.

This is the Soviet, the firehose of falsehood tactic. So it's very useful for that. All the companies need to have pretty large staffs, I think, to deal with that, 'cause that will always be there. And that is really bad for our country. So Renée DiResta is just brilliant on this.

Reading her work has really frightened me and opened my eyes about how easy it is to manipulate and spread misinformation and especially polarization. The Russians have been trying since the '50s, they would come to America and they would do hate crimes. They would spray swastikas on synagogues, and they would spray anti-Black slurs.

They try to make Americans feel that they're as divided as possible. Most of the debate nowadays, however, is not that. It seems to be people are talking about what the other side is saying. So if you're on the right, then you're very conscious of the times when, well, the left wouldn't let us even say, could COVID be from a lab?

They would, you literally would get shut down for saying that, and it turns out, well, we don't know if it's true, but there's at least a real likelihood that it is, and it certainly is something that should have been talked about. So I tend to stay away from any such discussions.

And the reason is twofold. One is because they're almost entirely partisan. It generally is each side thinks what the other side is saying is misinformation or disinformation, and they can prove certain examples. So we're not gonna get anywhere on that. We certainly are never gonna get 60 votes in the Senate for anything about that.

I don't think content moderation is nearly as important as people think. It has to be done, and it can be improved. Almost all the action is in the dynamics, the architecture, the virality, and then the nature of who is on the platform, unverified people, and how much amplification they get.

That's what we should be looking at, rather than wasting so much of our breath on whether we're gonna do a little more or a little less content moderation. - So the true harm to society on average and over the long term is in the dynamics, is fundamentally in the dynamics of social media, not in the subtle choices of content moderation, aka censorship.

- Exactly. There've always been conspiracy theories. You know, "The Turner Diaries" is this book written in 1978. It introduced the replacement theory to a lot of people. Timothy McVeigh had it on him when he was captured in 1995 after the Oklahoma City bombing. It's a kind of a Bible of that fringe, violent, racist, white supremacist group.

So the killer in Buffalo was well acquainted with these ideas. They've been around, but this guy's from a small town. I forget where he's from. But he was, as he says in his manifesto, he was entirely influenced by things he found online. He was not influenced by anyone he met in person.

Ideas spread and communities can form, these like micro communities can form with bizarre and twisted beliefs. And this is, again, back to the "Atlantic" article. I've got this amazing quote from Martin Gurri. I mean, you can just find it. But Martin Gurri, he was a former CIA analyst, wrote this brilliant book called "The Revolt of the Public," and he has this great quote.

He says, he talks about how in the age of mass media, we were all in a sense looking at a mirror, looking back at us. And it might've been a distorted mirror, but we had stories in common, we had facts in common. It was mass media. And he describes how the flood of information with the internet is like a tsunami washing over us.

It has all kinds of effects. And he says, this is from a comment in an interview on Vox, he says, "The digital revolution has shattered that mirror. And now the public inhabits those broken pieces of glass. So the public isn't one thing. It's highly fragmented and it's basically mutually hostile.

It's mostly people yelling at each other and living in bubbles of one sort or another." And so, we now see clearly there's this little bubble of just bizarre nastiness in which the killer in Christchurch and the killer in Norway and now in Buffalo, they're all put into a community and posts flow up within that community by a certain dynamic.

So we can never stamp those words or ideas out. The question is not, can we stop them from existing? The question is, what platforms, what are the platforms by which they spread all over the world and into every little town so that the 1% of whatever, whatever percentage of young men are vulnerable to this, that they get exposed to it.

It's in the dynamics and the architecture. - It's a fascinating point to think about 'cause we often debate and think about the content moderation, the censorship, the ideas of free speech, but you're saying, yes, that's important to talk about, but much more important is fixing the dynamics. - That's right, 'cause everyone thinks if there's regulation, it means censorship.

At least people on the right think regulation equals censorship. And I'm trying to say, no, no, that's only if all we talk about is content moderation. Well, then yes, that is the framework. You know, how much or how little do we, you know, but I don't even wanna talk about that 'cause all the action is in the dynamics.

That's the point of my article. It's the architecture changed and our social world went insane. - So can you try to steel man the other side? So the people that might say that social media is good for society overall, both in the dimension of mental health, as Mark said, for teenagers, teenage girls, and for our democracy.

Yes, there's a lot of negative things, but that's slices of data. If you look at the whole, which is difficult to measure, it's actually good for society. And to the degree that it's not good, it's getting better and better. Is it possible to steel man their point? - Yeah.

It's hard, but I should be able to do it. I need to put my money where my mouth is, and that's a good question. So on the mental health front, you know, the argument is usually what they say is, well, you know, for communities that are cut off, especially LGBTQ kids, they can find each other.

So it connects kids, especially kids who wouldn't find connection otherwise. It exposes you to a range of ideas and content, and it's fun. - In the studies you looked at, are there inklings of data, maybe early data, that show there are positive effects in terms of self-report data? Or how would you measure positive effects behaviorally?

It's difficult. - Right, so if you look at how do you feel when you're on the platform, you get a mix of positive and negative, and people say they feel supported. And this is what Mark was referring to when he said, you know, there was like 18 criteria, and on most it was positive, and on some it was negative.

So if you look at how do you feel while you're using the platform, look, most kids enjoy it, they're having fun, but some kids are feeling inferior, cut off, bullied. So if we're saying what's the average experience on the platform, that might actually be positive. If we just measured the hedonics, like how much fun versus fear is there, it could well be positive.

But what I'm trying to, okay, so is that enough steel manning? Can I? - That's pretty good. - Okay. - You held your breath. - Yeah. But what I'm trying to point out is this isn't a dose response sugar thing, like how do you feel while you're consuming heroin?

Like while I'm consuming heroin, I feel great, but am I glad that heroin came into my life? Am I glad that everyone in my seventh grade class is on heroin? Like, no, I'm not. Like I wish that people weren't on heroin, and they could play on the playground, but instead they're just, you know, sitting on the bench shooting up during recess.

So when you look at it as an emergent phenomenon that changed childhood, now it doesn't matter what are the feelings while you're actually using it. We need to zoom out and say, how has this changed childhood? - Can you try to do the same for democracy? - Yeah. So we can go back to, you know, what Mark said in 2012 when he was taking Facebook public, and you know, this is the wake of the Arab Spring.

I think people really have to remember what an extraordinary year 2011 was. It starts with the Arab Spring. Dictators are pulled down. Now people say, you know, Facebook took them down. I mean, of course it was the citizens, the people themselves took down dictators, aided by Facebook and Twitter and, I don't know, texting, there were some other platforms they used.

So the argument that Mark makes in this letter to potential shareholders, investors, is, you know, we're at a turning point in history, and, you know, social media is rewiring. We're giving people the tools to rewire their institutions. So this all sounds great. Like this is the democratic dream. And what I write about in the essay is the period of techno-democratic optimism, which began in the early '90s with the fall of the Iron Curtain and the Soviet Union.

And then the internet comes in and, you know, people, I mean, people my age remember how extraordinary it was, how much fun it was. I mean, the sense that this was the dawning of a new age, and there was so much optimism. And so this optimism runs all the way from the early '90s, all the way through 2011 with the Arab Spring.

And of course that year ends with Occupy Wall Street. And there were also big protest movements in Israel, in Spain, and in a lot of areas. Martin Gurri talks about this. So there certainly was a case to be made that Facebook in particular, but all these platforms, these were God's gift to democracy.

What dictator could possibly keep out the internet? What dictator could stand up to people connected on these digital media platforms? So that's the strong case that this is gonna be good for democracy. And then we can see what happened in the years after. Now, first of all, so in Mark's response to you, so here, let me read from what he said when you interviewed him.

He says, "I think it's worth grounding this conversation "in the actual research that has been done on this, "which by and large finds that social media "is not a large driver of polarization." He says that. Then he says, "Most academic studies that I've seen "actually show that social media use "is correlated with lower polarization." That's a factual claim that he makes, which is not true.

But he asserts that, well, actually, wait, it's tricky because he says the studies he has seen. So I can't, so it might be that the studies he has seen say that, but if you go to the Google Doc with Chris Bail, you see there's seven different questions that can be addressed.

And on one of them, which is filter bubbles, the evidence is very mixed. And he might be right that Facebook overall doesn't contribute to filter bubbles. But on the other six, the evidence is pretty strongly on the yes side, it is a cause. - He also draws a line between the United States versus the rest of the world.

- Right, and there's one thing true about that, which is that polarization has been rising much faster in the US than in any other major country. So he's right about that. So we're talking about an article by Matthew Gentzkow and a few other researchers. A very important article, we've got it in the political dysfunction database.

- And we should say that in this study, there's, like I started to say, there's a lot of fascinating questions that it's organized by, whether studies indicate yes or no. Question one is does social media make people more angry or affectively polarized? Question two is does social media create echo chambers?

These are fascinating, really important questions. Question three is does social media amplify posts that are more emotional, inflammatory, or false? Question four is does social media increase the probability of violence? Question five is does social media enable foreign governments to increase political dysfunction in the US and other democracies?

Question six, does social media decrease trust? Seven, does social media strengthen populist movements? And then there's other sections, as you mentioned. - Yeah, that's right. But once you operationalize it as seven different questions, so one is about polarization and there are measures of that, the degree to which people say they hate the other side.

And so in this study by Boxell, Gentzkow, and Shapiro, 2021, they looked at all the measures of polarization they could find going back to the 1970s for about 20 different countries. And they show plots, you have these nice plots with red lines showing that in some countries it's going up, like the United States especially, in some countries it's going down, and in some countries it's pretty flat.

And so Mark says, well, you know, if polarization's going up a lot in the US but not in most other countries, well, maybe Facebook isn't responsible. But so much depends on how you operationalize things. Are we interested in the straight line, regression line, going back to the '70s? And if so, well, then he's right in what he says.

But that's not the argument. The argument isn't about whether it's been rising or falling since the '70s. The argument is about what it's been doing since 2012 or so. And for that, I've been emailing with the authors of the study and they say there's not really enough data to do it statistically reliably 'cause there's only a few observations after 2012.

But if you look at the graphs in their study, and they actually do provide, as they pointed out to me, they do provide a statistical test if you break the data at the year 2000. So actually, polarization is going up pretty widely if you just look after 2000, which is when the internet would be influential.
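
The methodological point about trend windows can be shown with a toy example. The series below is synthetic, not the Boxell, Gentzkow, and Shapiro data; it only illustrates how a whole-sample regression line and a post-break slope can tell different stories.

import numpy as np

years = np.arange(1975, 2021)
rng = np.random.default_rng(0)
# Made-up series: flat until 2000, rising afterwards, plus noise.
series = np.where(years < 2000, 50.0, 50.0 + 0.8 * (years - 2000))
series = series + rng.normal(0, 1.5, size=years.size)

def slope(x, y):
    # Least-squares linear trend (units per year).
    return np.polyfit(x, y, 1)[0]

print("slope 1975-2020:", round(slope(years, series), 2))
after = years >= 2000
print("slope 2000-2020:", round(slope(years[after], series[after]), 2))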

And if you look just after 2012, you have to just do it by eye. But if you do it on their graphs by eye, you see that actually a number of countries do see a sudden sharp upturn. Not all, not all by any means. But my point is, Mark asserts, he points to one study and he points to this over and over again.

I have had two conversations with him. He pointed to this study both times. He asserts that this study shows that polarization is up some places, down other places. There's no association. But actually, we have another section in the Google Doc where we review all the data on the decline of democracy.

And the high point of democracy, of course, it was rising in the '90s. But if you look around the world, by some measures it begins to drop in the late 2000s, around 2007, 2008. By others, it's in the early to mid 2010s. The point is, there is a, by many measures, there's a drop in the quality and the number of democracies on this planet that began in the 2010s.

And so, yes, Mark can point to one study. But if you look in the Google Doc, there are a lot of other studies that point the other way. And especially about whether things are getting more polarized or less polarized. Not in all countries, but in a lot. - So you've provided the problem and several proposals for solutions.

Do you think Mark, do you think Elon or whoever is at the head of Twitter would be able to implement these changes? Or does there need to be a competitor, social network to step up? If you were to predict the future, now this is you giving sort of financial advice to me.

No. - No, that I can't do. - Definitely not financial advice. - I can give you advice. Do the opposite of whatever I've done. - Okay, excellent. (laughs) But what do you think, when we talk again in 10 years, what do you think we'd be looking at if it's a better world?

- Yeah, so you have to look at the dynamics of each change that needs to be made. And you have to look at it systemically. And so the biggest change for teen mental health, I think, is to raise the age from 13. It was set to 13 in COPPA in like 1997 or '98, whatever it was.

It was set to 13 with no enforcement. I think it needs to go to 16 or 18 with enforcement. Now, there's no way that Facebook will do this on its own. So look, on Instagram, the age is 13, but they don't enforce it. And they're under pressure to not enforce it because if they did enforce it, then all the kids would just go to TikTok, which they're doing anyway.

But if we go back a couple of years, when they were talking about rolling out Facebook for kids, 'cause they need to get those kids, they need to get kids under 13. There's a business imperative to hook them early and keep them. So I don't expect Facebook to act on its own accord and do the right thing because then they-- - So regulation is the only way.

- Exactly. When you have a social dilemma, like what economists call a prisoner's dilemma, a social dilemma is that generalized to multiple people. And when you have a social dilemma, each player can't opt out because they're gonna lose. You have to have central regulation. So I think we have to raise the age.
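
To make the social-dilemma logic explicit, here is a toy payoff table with entirely made-up numbers, showing why no single platform enforces an age limit on its own even though all of them would be better off if everyone did.

def payoff(i_enforce: bool, others_enforce: bool) -> int:
    # Hypothetical payoffs for one platform deciding whether to enforce an age limit.
    if i_enforce and others_enforce:
        return 8   # everyone enforces: healthier users, nobody loses market share
    if i_enforce and not others_enforce:
        return 2   # enforce alone: your under-16 users defect to rivals
    if not i_enforce and others_enforce:
        return 10  # defect alone: you scoop up everyone else's young users
    return 5       # nobody enforces: the status quo

for others in (True, False):
    best_value, best_choice = max((payoff(me, others), me) for me in (True, False))
    print(f"if others enforce={others}, best response is enforce={best_choice}")
# The best response is always not to enforce, even though mutual enforcement (8)
# beats mutual non-enforcement (5), which is the case for central regulation.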

The UK parliament is way ahead of us. I think they're actually functional. The US Congress is not functional. So the parliament is implementing the age-appropriate design code, and that may put pressure on the platforms globally to change certain things. So anyway, my point is, we have to have regulation to force them to be transparent and share what they're doing.

There are some good bills out there. So I think that if the companies and the users, if we're all stuck in a social dilemma in which the incentives against doing the right thing are strong, we do need regulation on certain matters. And again, it's not about content moderation, who gets to say what, but it's things like the Platform Accountability and Transparency Act, which is from Senators Coons, Portman, and Klobuchar.

This would force the platforms to just share information on what they're doing. Like we can't even study what's happening without the information. So that I think is just common sense. Senator Michael Bennet introduced the Digital Platforms Commission Act of 2022, which would create a body tasked with actually regulating and having oversight.

Right now, the US government doesn't have a body. I mean, the FTC can do certain things. We have things about antitrust, but we don't have a body that can oversee or understand these things that are transforming everything and possibly severely damaging our political life. Oh, and then the state of California is actually currently considering a version of the UK's age-appropriate design code, which would force the companies to do some simple things, like not sending alerts and notifications to children at 10 or 11 o'clock at night, just things like that to make platforms less damaging.

So I think there's an essential role for regulation. And I think if the US Congress is too paralyzed by politics, if the UK and the EU and the state of California and the state, a few other states, if they enact legislation, the platforms don't wanna have different versions in different states or countries.

So I think there actually is some hope, even if the US Congress is dysfunctional. - So there is, 'cause I've been interacting with certain regulations designed to hit Amazon, but they're hitting YouTube. YouTube folks have been talking to me about it. It's about recommender systems, the algorithm having to be public, I think, versus private, which completely breaks things.

It's way too clumsy a regulation, where the unintended consequences break recommender systems, not for Amazon, but for other platforms. That's just to say that government can sometimes be clumsy with regulation. - Usually is. - And so my preference is the threat of regulation in a friendly way encourages-- - Good behavior.

- You really shouldn't need it. You really shouldn't need it. My preference is great leaders lead the way in doing the right thing. And honestly, this goes back to our earlier, maybe naive on my part, disagreement: I think it's good business to do the right thing in these spaces.

So creating a problem-- - Sometimes it is. Sometimes it loses you most of your users. - Well, I think it's important because I've been thinking a lot about World War III recently. - Yeah. - And it might be silly to say, but I think social media has a role in either creating World War III or avoiding World War III.

It seems like so much of wars throughout history have been started through very fast escalation. And it feels like just looking at our recent history, social media is the mechanism for escalation. And so it's really important to get this right, not just for the mental health of young people, not just for the polarization of bickering over small-scale political issues, but literally the survival of human civilization.

So there's a lot at stake here. - Yeah, I certainly agree with that. I would just say that I'm less concerned about World War III than I am about Civil War II. I think that's a more likely prospect. - Yeah, yeah, yeah. Can I ask for your wise sage advice to young people?

So advice number one is put down the phone. Don't use Instagram and social media. But to young people in high school and college, how to have a career, or how to have a life they can be proud of. - Yeah, I'd be happy to, 'cause I teach a course at NYU in the business school called Work, Wisdom, and Happiness.

And the course is, it's advice on how to have a happy, a successful career as a human being. But the course has evolved that it's now about three things, how to get stronger, smarter, and more sociable. If you can do those three things, then you will be more successful at work and in love and friendships.

And if you are more successful in work, love, and friendships, then you will be happier. You will be as happy as you can be, in fact. So the question is, how do you become smarter, stronger, and happier? And the answer to all three is, it's a number of things, but you have to see yourself as this complex adaptive system.

You've got this complicated mind that needs a lot of experience to wire itself up. And the most important part of that experience is that you don't grow when you are with your attachment figure. You don't grow when you're safe. You have an attachment figure to make you feel confident to go out and explore the world.

In that world, you will face threats, you will face fear, and sometimes you'll come running back. But you have to keep doing it because over time, you then develop the strength to stay out there and to conquer it. That's normal human childhood. That's what we blocked in the 1990s in this country.

So young people have to get themselves the childhood, and this is all the way through adolescence and young adulthood, they have to get themselves the experience that older generations are blocking them from out of fear, and that their phones are blocking them from by hijacking almost all the inputs into their life and almost all the minutes of their day.

So go out there, put yourself out in experiences. You are anti-fragile and you're not gonna get strong unless you actually have setbacks and criticisms and fights. So that's how you get stronger. And then there's an analogy in how you get smarter, which is you have to expose yourself to other ideas, to people that criticize you, people that disagree with you.

And this is why I co-founded Heterodox Academy because we believe that faculty need to be in communities that have political diversity and viewpoint diversity, but so do students. And it turns out students want this. The surveys show very clearly Gen Z has not turned against viewpoint diversity. Most of them want it, but they're just afraid of the small number that will sort of shoot darts at them if they say something wrong.

So anyway, the point is you're anti-fragile, and you have to realize that to get stronger and to get smarter. And then the key to becoming more sociable is very simple. It's just always looking at things through the other person's point of view. Don't be so focused on what you want and what you're afraid of.

Put yourself in the other person's shoes, what's interesting to them, what do they want? And if you develop the skill of looking at it from their point of view, you'll be a better conversation partner, you'll be a better life partner. So there's a lot that you can do. I mean, I could say, go read "The Coddling of the American Mind." I could say, go read Dale Carnegie, "How to Win Friends and Influence People." But take charge of your life and your development 'cause if you don't do it, then the older protective generation and your phone are gonna take charge of you.

- So on anti-fragility and coddling the American mind, if I may read just a few lines from Chief Justice John Roberts, which I find really beautiful. So it's not just about viewpoint diversity, but it's real struggle, absurd, unfair struggle that seems to be formative to the human mind.

He says, "From time to time in the years to come, I hope you will be treated unfairly so that you will come to know the value of justice. I hope that you will suffer betrayal because that will teach you the importance of loyalty. Sorry to say, but I hope you will be lonely from time to time so that you don't take friends for granted.

I wish you bad luck again from time to time so that you will be conscious of the role of chance in life and understand that your success is not completely deserved and that the failure of others is not completely deserved either. And when you lose, as you will from time to time, I hope every now and then, your opponent will gloat over your failure.

It is a way for you to understand the importance of sportsmanship. I hope you'll be ignored so you know the importance of listening to others. And I hope you will have just enough pain to learn compassion. Whether I wish these things or not, they're going to happen. And whether you benefit from them or not will depend upon your ability to see the message in your misfortunes.

He read that at a middle school graduation. - Yes, for his son's middle school graduation. That's what I was trying to say, only that's much more beautiful. - Yeah, and I think your work is really important and it is beautiful and it's bold and fearless and it's a huge honor that you would sit with me.

I'm a big fan. Thank you for spending your valuable time with me today, John, thank you so much. - Thanks so much, Lex, what a pleasure. - Thanks for listening to this conversation with Jonathan Haidt. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Carl Jung.

"Everything that irritates us about others "can lead us to an understanding of ourselves." Thank you for listening and hope to see you next time. (upbeat music) (upbeat music)