There's a broader question here, right? As we build socially and emotionally intelligent machines, what does that mean about our relationship with them? And then more broadly, our relationship with one another, right? Because this machine is gonna be programmed to be amazing at empathy by definition, right? It's gonna always be there for you.
It's not gonna get bored. I don't know how I feel about that. I think about that a lot. - The following is a conversation with Rana el Kaliouby, a pioneer in the field of emotion recognition and human-centric artificial intelligence. She is the founder of Affectiva, deputy CEO of Smart Eye, author of "Girl Decoded," and one of the most brilliant, kind, inspiring, and fun human beings I've gotten the chance to talk to.
This is the Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here's Rana el Kaliouby. You grew up in the Middle East, in Egypt. What is a memory from that time that makes you smile? Or maybe a memory that stands out as helping your mind take shape and helping you define yourself in this world?
- So the memory that stands out is we used to live in my grandma's house. She used to have these mango trees in her garden and in the summer, and so mango season was like July and August. And so in the summer, she would invite all my aunts and uncles and cousins.
And it was just like maybe there were like 20 or 30 people in the house and she would cook all this amazing food. And us, the kids, we would go down the garden and we would pick all these mangoes. And I don't know, I think it's just the bringing people together, like that always stuck with me, the warmth.
- Around the mango tree. - Yeah, around the mango tree. And there's just like the joy, the joy of being together around food. And I'm a terrible cook, so I guess that didn't, that memory didn't translate to me kind of doing the same. I love hosting people. - Do you remember colors, smells?
Is that what, like what, how does memory work? - Yeah. - Like what do you visualize? Do you visualize people's faces, smiles? Do you, is there colors? Is there like a theme to the colors? Is it smells because of food involved? - Yeah, I think that's a great question.
So those Egyptian mangoes, there's a particular type that I love and it's called Darwesi mangoes. And they're kind of, you know, they're oval and they have a little red in them. So I kind of, they're red and mango colored on the outside. So I remember that. - Does red indicate like extra sweetness?
Is that? - Yes. - That means like it's nicely-- - It's like really sweet. - Yeah, it's nice and ripe and stuff, yeah. What's like a definitive food of Egypt? You know, there's like these almost stereotypical foods in different parts of the world, like Ukraine invented borscht. Borscht is this beet soup with, that you put sour cream on.
See, it's not, I can't see-- - Okay, okay, well you explained it that way. - If you know what it is, I think you know it's delicious. But if I explain it, it's just not gonna sound delicious. I feel like beet soup, this doesn't make any sense. But that's kind of, and you probably have actually seen pictures of it 'cause it's one of the traditional foods in Ukraine, in Russia, in different parts of the Slavic world.
So, but it's become so cliche and stereotypical that you almost don't mention it, but it's still delicious. Like I visited Ukraine, I eat that every single day. - Do you make it yourself? How hard is it to make? - No, I don't know. I think to make it well, like anything, like Italians, they say, well, tomato sauce is easy to make, but to make it right, that's like a generational skill.
So anyway, is there something like that in Egypt? Is there a culture of food? - There is, and actually we have a similar kind of soup. It's called molokhia, and it's made of this green plant. It's like somewhere between spinach and kale, and you mince it, and then you cook it in like chicken broth.
And my grandma used to make it, and my mom makes it really well, and I try to make it, but it's not as great. So we used to have that, and then we used to have it alongside stuffed pigeons. I'm pescetarian now, so I don't eat that anymore, but. - Stuffed pigeons.
- Yeah, it's like, it was really yummy. It's the one thing I miss about, you know, now that I'm pescetarian and I don't eat. - The stuffed pigeons? - Yeah, the stuffed pigeons. (both laughing) - Is it, what are they stuffed with? If that doesn't bother you too much to describe.
- No, no, it's stuffed with a lot of like just rice and. - Oh, got it, got it, got it. - Yeah, it's just rice, yeah, so. - And you've also said in your book that your first computer was an Atari, and Space Invaders was your favorite game.
Is that when you first fell in love with computers? Would you say? - Yeah, I would say so. - Video games or just the computer itself? Just something about the machine. Ooh, this thing. There's magic in here. - Yeah, I think the magical moment is definitely like playing video games with my, I have two younger sisters, and we just like had fun together, like playing games.
But the other memory I have is my first code. The first code I wrote, I wrote, I drew a Christmas tree. And I'm Muslim, right? So it's kind of, it was kind of funny that the first thing I did was like this Christmas tree. So yeah, and that's when I realized, wow, you can write code to do all sorts of like really cool stuff.
I must have been like six or seven at the time. So you can write programs and the programs do stuff for you. That's power. That's, if you think about it, that's empowering. - It's AI. - Yeah, I know, well, it is. I don't know what that, you see, like, I don't know if many people think of it that way when they first learned to program.
They just love the puzzle of it. Like, ooh, this is cool, this is pretty. It's a Christmas tree, but like, it's power. - It is power. - Eventually, I guess you couldn't at the time, but eventually this thing, if it's interesting enough, if it's a pretty enough Christmas tree, it can be run by millions of people and bring them joy, like that little thing.
And then because it's digital, it's easy to spread. So like you just created something that's easily spreadable to millions of people. - Totally. - It's hard to think that way when you're six. In the book you write, "I am who I am because I was raised by a particular set of parents, both modern and conservative, forward-thinking yet locked in tradition.
I'm a Muslim and I feel I'm stronger, more centered for it. I adhere to the values of my religion, even if I'm not as dutiful as I once was. And I am a new American and I'm thriving on the energy, vitality, and entrepreneurial spirit of this great country." So let me ask you about your parents.
What have you learned about life from them, especially when you were young? - So both my parents, they're Egyptian, but they moved to Kuwait right after. They actually, there's a cute story about how they met. So my dad taught COBOL in the '70s. - Nice. - And my mom decided to learn programming.
So she signed up to take his COBOL programming class. And he tried to date her and she was like, "No, no, no, I don't date." And so he's like, "Okay, I'll propose." And that's how they got married. - Whoa, strong move. - Right, exactly right. - That's really impressive.
Those COBOL guys know how to impress a lady. So yeah, so what have you learned from them? - So definitely grit. One of the core values in our family is just hard work. There were no slackers in our family. And that's something that's definitely stayed with me, both as a professional, but also in my personal life.
But I also think my mom, my mom always used to, I don't know, it was like unconditional love. I just knew my parents would be there for me, kind of regardless of what I chose to do. And I think that's very powerful. And they got tested on it because I kind of challenged, you know, I challenged cultural norms and I kind of took a different path, I guess, than what's expected of a woman in the Middle East.
And they still love me, which is, I'm so grateful for that. - When was like a moment that was the most challenging for them? Which moment were they kind of, they had to come face to face with the fact that you're a bit of a rebel? - I think the first big moment was when I, I had just gotten married, but I decided to go do my PhD at Cambridge University.
And because my husband at the time, he's now my ex, ran a company in Cairo, he was gonna stay in Egypt. So it was gonna be a long distance relationship. And that's very unusual in the Middle East for a woman to just head out and kind of pursue her career.
And so my dad and my parents-in-law both said, "You know, we do not approve of you doing this, but now you're under the jurisdiction of your husband, so he can make the call." And luckily for me, he was supportive. He said, "You know, this is your dream come true.
You've always wanted to do a PhD. I'm gonna support you." So I think that was the first time where, you know, I challenged the cultural norms. - Was that scary? - Oh my God, yes. It was totally scary. - What's the biggest culture shock from there to Cambridge, to London?
- Well, that was also right around September 11th. So everyone thought that there was gonna be a third world war. It was really scary, and at the time I used to wear the hijab, so I was very visibly Muslim. And so my parents just were, they were afraid for my safety.
But anyways, when I got to Cambridge, because I was so scared, I decided to take off my headscarf and wear a hat instead. So I just went to class wearing these like British hats, which was, in my opinion, actually worse than just showing up in a headscarf, 'cause it was just so awkward, right?
Like sitting in class with like all these- - Trying to fit in, like a spy. - Yeah, yeah, yeah. So after a few weeks of doing that, I was like, to heck with that, I'm just gonna go back to wearing my headscarf. - Yeah, you wore the hijab starting in 2000, and for 12 years after.
So always, whenever you're in public, you have to wear the head covering. Can you speak to that, to the hijab, maybe your mixed feelings about it? Like what does it represent in its best case? What does it represent in the worst case? - Yeah, you know, I think there's a lot of, I guess I'll first start by saying I wore it voluntarily.
I was not forced to wear it. And in fact, I was one of the very first women in my family to decide to put on the hijab. And my family thought it was really odd, right? Like they were like, why do you wanna put this on? And at its best, it's a sign of modesty, humility.
- It's like me wearing a suit. People are like, why are you wearing a suit? It's a step back into some kind of tradition, a respect for tradition of sorts. So you said because it's by choice, you're kind of free to make that choice to celebrate a tradition of modesty.
- Exactly, and I actually like made it my own. I remember I would really match the color of my headscarf with what I was wearing. Like it was a form of self-expression, and at its best, I loved wearing it. You know, I have a lot of questions around how we practice religion, and religion itself.
And I think also it was a time where I was spending a lot of time going back and forth between the US and Egypt. And I started meeting a lot of people in the US who were just amazing people, very purpose-driven, people who have very strong core values, but they're not Muslim.
That's okay, right? And so that was when I just had a lot of questions. And politically also the situation in Egypt was when the Muslim Brotherhood ran the country, and I didn't agree with their ideology. It was at a time when I was going through a divorce. Like it was like just the perfect storm of like political, personal conditions, where I was like, "This doesn't feel like me anymore." And it took a lot of courage to take it off because culturally it's okay if you don't wear it, but it's really not okay to wear it and then take it off.
- But you're still, so you had to do that while still maintaining a deep core and pride in the origins, in your origin story. - Totally. - So still being Egyptian, still being a Muslim. - Right, and being, I think, generally like faith-driven, but yeah. - But what that means changes year by year for you.
It's like a personal journey. - Yeah, exactly. - What would you say is the role of faith in that part of the world? Like how do you see it? You mention it a bit in the book too. - Yeah, I mean, I think there is something really powerful about just believing that there's a bigger force.
You know, there's a kind of surrendering, I guess, that comes with religion and you surrender and you have this deep conviction that it's gonna be okay, right? Like the universe is out to like do amazing things for you and it's gonna be okay. And there's strength to that. Like even when you're going through adversity, you just know that it's gonna work out.
- Yeah, it gives you like an inner peace, a calmness. - Exactly, exactly. - Yeah, it's faith in all the meanings of that word. - Right. - Faith that everything is going to be okay. And it is because time passes and time cures all things. It's like a calmness with the chaos of the world.
- And also there's like a silver lining. I'm a true believer of this, that something at the specific moment in time can look like it's catastrophic and it's not what you wanted in life, da-da-da-da. But then time passes and then you look back and there's a silver lining, right?
It maybe closed the door, but it opened a new door for you. And so I'm a true believer in that, that there's a silver lining in almost anything in life. You just have to have this like, have faith or conviction that it's gonna work out. - Such a beautiful way to see a shitty feeling.
So if you feel shitty about a current situation, I mean, it almost is always true, unless it's the cliches thing of, if it doesn't kill you, whatever doesn't kill you makes you stronger. It does seem that over time, when you take a perspective on things, the hardest moments and periods of your life are the most meaningful.
- Yeah, yeah. So over time you get to have that perspective. - Right. - What about, 'cause you mentioned Kuwait, what about, let me ask you about war. What's the role of war and peace, maybe even the big love and hate in that part of the world, because it does seem to be a part of the world where there's turmoil.
There was turmoil, there's still turmoil. - It is so unfortunate, honestly. It's such a waste of human resources and yeah, and human mindshare. I mean, at the end of the day, we all kind of want the same things. We want human connection, we want joy, we wanna feel fulfilled, we wanna feel a life of purpose.
And I just find it baffling, honestly, that we are still having to grapple with that. I have a story to share about this. I grew up, I'm Egyptian, American now, but originally from Egypt. And when I first got to Cambridge, it turned out my office mate, like my PhD kind of, we ended up becoming friends, but she was from Israel.
And we didn't know, yeah, we didn't know how it was gonna be like. - Did you guys sit there just staring at each other for a bit? - Actually, she, 'cause I arrived before she did and it turns out she emailed our PhD advisor and asked him if she thought it was gonna be okay.
- Yeah. Oh, this is around 9/11 too. - Yeah, and Peter Robinson, our PhD advisor was like, yeah, like this is an academic institution, just show up. And we became super good friends. We were both new moms. Like we both had our kids during our PhD. We were both doing artificial emotional intelligence.
She was looking at speech. I was looking at the face. We just had so, the culture was so similar. Our jokes were similar. It was just, I was like, why on earth are our countries, why is there all this like war and tension? And I think it falls back to the narrative, right?
If you change the narrative, like whoever creates this narrative of war, I don't know. We should have women run the world. - Yeah, that's one solution, the good women, because there's also evil women in the world. - True, okay. (both laughing) - But yes, yes, there could be less war if women ran the world.
The other aspect is, it doesn't matter the gender, the people in power. I get to see this with Ukraine and Russia, different parts of the world around that conflict now. And that's happening in Yemen as well and everywhere else. There's these narratives told by the leaders to the populace and those narratives take hold and everybody believes that and they have a distorted view of the humanity on the other side.
In fact, especially during war, you don't even see the people on the other side as human or as equal intelligence or worth or value as you. You tell all kinds of narratives about them being Nazis or dumb or whatever narrative you wanna weave around that. Or evil. But I think when you actually meet them face to face, you realize they're the same.
- Exactly right. - It's actually a big shock for people to realize that they've been essentially lied to within their country. And I kind of have faith that social media, as ridiculous as it is to say, or any kind of technology is able to bypass the walls that governments put up and connect people directly and then you get to realize, ooh, people fall in love across different nations and religions and so on.
And that I think ultimately can cure a lot of our ills, especially sort of in person. I also think that if leaders met in person to have a conversation, that would cure a lot of the ills of the world, especially in private. - Let me ask you about the women running the world.
- Okay. - So gender does in part, perhaps shape the landscape of just our human experience. So in what ways was it limiting it? In what ways was it empowering for you to be a woman in the Middle East? - I think just kind of just going back to like my comment on like women running the world, I think it comes back to empathy, right?
Which has been a common thread throughout my entire career. And it's this idea of human connection. Once you build common ground with a person or a group of people, you build trust, you build loyalty, you build friendship, and then you can turn that into like behavior change and motivation and persuasion.
So it's like empathy and emotions are just at the center of everything we do. And I think being from the Middle East, kind of this human connection is very strong. Like we have this running joke that if you come to Egypt for a visit, people are gonna know everything about your life like right away, right?
I have no problems asking you about your personal life. There's no like no boundaries really, no personal boundaries in terms of getting to know people. We get emotionally intimate like very, very quickly. But I think people just get to know each other like authentically, I guess. There isn't this like superficial level of getting to know people.
You just try to get to know people really deeply. - And empathy is a part of that. - Totally, 'cause you can put yourself in this person's shoe and kind of, yeah, imagine what challenges they're going through. So I think I've definitely taken that with me. Generosity is another one too, like just being generous with your time and love and attention and even with your wealth, right?
Even if you don't have a lot of it, you're still very generous. And I think that's another. - Enjoying the humanity of other people. And so do you think there's a useful difference between men and women in that aspect and empathy? Or is doing these kind of big general groups, does that hinder progress?
- Yeah, I actually don't wanna overgeneralize. I mean, some of the men I know are like the most empathetic humans. - Yeah, I strive to be empathetic. - Yeah, you're actually very empathetic. Yeah, so I don't wanna overgeneralize. Although one of the researchers I worked with when I was at Cambridge, Professor Simon Baron-Cohen, he's Sacha Baron Cohen's cousin.
- Yeah. (laughs) - But he runs the Autism Research Center at Cambridge and he's written multiple books on autism. And one of his theories is the empathy scale, like the systemizers and the empathizers. And there's a disproportionate amount of computer scientists and engineers who are systemizers and perhaps not great empathizers.
And then there's more men in that bucket, I guess, than women. And then there's more women in the empathizers bucket. So again, not to overgeneralize. - I sometimes wonder about that. It's been frustrating to me how many, I guess, systemizers there are in the field of robotics. - Yeah.
- It's actually encouraging to me 'cause I care about, obviously, social robotics. And because there's more opportunity for people that are empathic. (laughs) - Exactly, I totally agree. Well, right? - So it's nice. - Yes. - I mean, most roboticists I talk to, they don't see the human as interesting, as like it's not exciting.
You wanna avoid the human at all costs. It's a safety concern to be touching the human, which it is, but it's also an opportunity for deep connection or collaboration or all that kind of stuff. So, and because most brilliant roboticists don't care about the human, it's an opportunity. - Right.
- For, in your case, it's a business opportunity too, but in general, an opportunity to explore those ideas. So, in this beautiful journey to Cambridge, to UK, and then to America, what's the moment or moments that were most transformational for you as a scientist and as a leader? So you became an exceptionally successful CEO, founder, researcher, scientist, and so on.
Was there a phase shift there where like, I can be somebody, I can really do something in this world? - Yeah, so actually just kind of a little bit of background. So the reason why I moved from Cairo to Cambridge, UK to do my PhD is because I had a very clear career plan.
I was like, okay, I'll go abroad, get my PhD, gonna crush it in three or four years, come back to Egypt and teach. It was very clear, very well laid out. - Was topic clear or no? - Well, I did my PhD around building artificial emotional intelligence and looking at-- - No, but in your master plan ahead of time, when you're sitting by the mango tree, did you know it's gonna be artificial intelligence?
- No, no, no, that I did not know. Although I think I kinda knew that I was gonna be doing computer science, but I didn't know the specific area. But I love teaching, I mean, I still love teaching. So I just, yeah, I just wanted to go abroad, get a PhD, come back, teach.
- Why computer science? Can we just linger on that? 'Cause you're such an empathic person who cares about emotion, humans and so on. Aren't computers cold and emotionless? (laughing) Just-- - We're changing that. - Yeah, I know, but isn't that the, or did you see computers as having the capability to actually connect with humans?
- I think that was my takeaway from my experience just growing up. Computers sit at the center of how we connect and communicate with one another, right? Or technology in general. Like I remember my first experience being away from my parents. We communicated with a fax machine, but thank goodness for the fax machine because we could send letters back and forth to each other.
This was pre-emails and stuff. So I think there's, I think technology can be not just transformative in terms of productivity, et cetera. It actually does change how we connect with one another. - Can I just defend the fax machine? - Yeah. - There's something, like the haptic feel, 'cause the email is all digital.
There's something really nice. I still write letters to people. There's something nice about the haptic aspect of the fax machine 'cause you still have to press, you still have to do something in the physical world to make this thing a reality, the sense of somebody. - Right, and then it comes out as a printout and you can actually touch it and read it.
- Yeah, there's something lost when it's just an email. Obviously, I wonder how we can regain some of that in the digital world, which goes to the metaverse and all those kinds of things. We'll talk about it. Anyway, so-- - Actually, do you have a question on that one?
Do you still, do you have photo albums anymore? Do you still print photos? - No, no, but I'm a minimalist. So it was one of the painful steps in my life was to scan all the photos and let go of them and then let go of all my books.
- You let go of your books? - Yeah, switched to Kindle, everything Kindle. So I thought, okay, think 30 years from now. Nobody's gonna have books anymore. The technology of digital books is gonna get better and better and better. Are you really gonna be the guy that's still romanticizing physical books?
Are you gonna be the old man on the porch who's like kids, yes. So just get used to it 'cause it felt, it still feels a little bit uncomfortable to read on a Kindle, but get used to it. You always, I mean, I'm trying to learn new programming languages always.
Like with technology, you have to kind of challenge yourself to adapt to it. I forced myself to use TikTok now. That thing doesn't need much forcing. It pulls you in like the worst kind of, or the best kind of drug. Anyway, yeah, so yeah, but I do love haptic things.
There's a magic to the haptic. Even like touchscreens, it's tricky to get right, to get the experience of a button. - Yeah. - Anyway, what were we talking about? So AI, so the journey, your whole plan was to come back to Cairo and teach, right? - And then-- - Where did the plan go wrong?
- Yeah, exactly, right? And then I got to Cambridge and I fall in love with the idea of research, right? And kind of embarking on a path. Nobody's explored this path before. You're building stuff that nobody's built before and it's challenging and it's hard and there's a lot of non-believers.
I just totally love that. And at the end of my PhD, I think it's the meeting that changed the trajectory of my life. Professor Rosalind Picard, she runs the Affective Computing Group at the MIT Media Lab. I had read her book. You know, I was like following all her research.
- AKA Roz. - Yes, AKA Roz. And she was giving a talk at a pattern recognition conference in Cambridge and she had a couple of hours to kill. So she emailed the lab and she said, you know, if any students wanna meet with me, like just, you know, sign up here.
And so I signed up for a slot and I spent like the weeks leading up to it preparing for this meeting. I wanted to show her a demo of my research and everything. And we met and we ended up hitting it off. Like we totally clicked. And at the end of the meeting, she said, do you wanna come work with me as a postdoc at MIT?
And this is what I told her. I was like, okay, this would be a dream come true, but there's a husband waiting for me in Cairo. I kind of have to go back. And she said, it's fine, just commute. And I literally started commuting between Cairo and Boston. Yeah, it was a long commute.
And I did that like every few weeks, I would, you know, hop on a plane and go to Boston. But that changed the trajectory of my life. There was no, I kind of outgrew my dreams, right? I didn't wanna go back to Egypt anymore and be faculty. Like that was no longer my dream.
I had a new dream. - What was it like to be at MIT? What was that culture shock? You mean America in general, but also, I mean Cambridge has its own culture, right? So what was MIT like? What was America like? - I think, I wonder if that's similar to your experience at MIT, I was just, at the Media Lab in particular, I was just really, impressed is not the right word.
I didn't expect the openness to like innovation and the acceptance of taking a risk and failing. Like failure isn't really accepted back in Egypt, right? You don't wanna fail. Like there's a fear of failure, which I think has been hardwired in my brain. But you get to MIT and it's okay to start things.
And if they don't work out, like it's okay, you pivot to another idea. And that kind of thinking was just very new to me. - That's liberating. Well, Media Lab for people who don't know, MIT Media Lab is its own beautiful thing because they, I think more than other places at MIT, reach for big ideas.
And they try, I mean, depending of course on who, but certainly with Rosalind, you try wild stuff, you try big things and crazy things. And also try to take things to completion so you can demo them. So always have a demo. One of the sad things to me about robotics labs at MIT, and there's like over 30, I think, is that usually when you show up to a robotics lab there's not a single working robot, they're all broken.
All the robots are broken, which is like the normal state of things because you're working on them. But it would be nice if we lived in a world where robotics labs had some robots functioning. One of my like favorite moments that just sticks with me, I visited Boston Dynamics and there was a, first of all, seeing so many spots, so many legged robots in one place, I'm like, I'm home.
(both laughing) But the- - Your tribe. - Yeah. This is where I was built. The cool thing was just to see, there was a random Spot robot walking down the hall. It was probably doing mapping, but it looked like he wasn't doing anything, and he was wearing, he or she, I don't know.
But it, well, I like, in my mind, there are people that have a backstory, but this one in particular definitely has a backstory because he was wearing a cowboy hat. So I just saw a spot robot with a cowboy hat walking down the hall. And there was just this feeling like there's a life, like he has a life.
He probably has to commute back to his family at night. Like there's a feeling like there's life instilled in this robot and that's magical. I don't know, it was kind of inspiring to see. - Did it say hello to, did he say hello to you? Did he say hello?
- No, it's very, there's a focused nature to the robot. No, no, listen, I love competence and focus. Like he was not gonna get distracted by the shallowness of small talk. There's a job to be done and he was doing it. So anyway, the fact that it was working is a beautiful thing.
And I think Media Lab really prides itself on trying to always have a thing that's working that it could show off. - Yes, we used to call it demo or die. You could not, yeah, you could not like show up with like PowerPoint or something. You actually had to have it working.
You know what, my son, who is now 13, I don't know if this is still his lifelong goal or not, but when he was a little younger, his dream is to build an island that's just inhabited by robots, like no humans. He just wants all these robots to be connecting and having fun and so there you go.
- Does he have human, does he have an idea of which robots he loves most? Is it Roomba-like robots? Is it humanoid robots, robot dogs, or is it not clear yet? - We used to have a Jibo, which was one of the MIT Media Lab spin-outs and he used to love Jibo.
- The thing with a giant head. - Yes. - That spins. - Right, exactly. - It can rotate and it's an eye. - It has, oh, no, yeah, it can. - Not glowing, like. - Right, right, right, right. Exactly. - It's like HAL 9000, but the friendly version. - (laughs) Right, he loved that.
And then he just loves, yeah, he just, I think he loves all forms of robots, actually. - So embodied intelligence. - Yes. - I like, I personally like legged robots, especially. Anything that can wiggle its butt. No. - And flip. - That's not the definition of what I love, but that's just technically what I've been working on recently.
So I have a bunch of legged robots now in Austin and I've been-- - Oh, that's so cool. - I've been trying to have them communicate affection with their body in different ways, just for art. - That's so cool. - For art, really. 'Cause I love the idea of walking around with the robots, like as you would with a dog.
I think it's inspiring to a lot of people, especially young people. Kids love robots. - Kids love it. - Parents, like adults are scared of robots, but kids don't have this kind of weird construction of the world that's full of evil. They love cool things. - Yeah. I remember when Adam was in first grade, so he must have been like seven or so, I went into his class with a whole bunch of robots and the emotion AI demo and da-da-da.
And I asked the kids, I was like, would you kids want to have a robot friend or a robot companion? Everybody said yes, and they wanted it for all sorts of things, like to help them with their math homework and to be a friend. So it just struck me how there was no fear of robots.
Whereas a lot of adults have that, like us versus them. - Yeah, none of that. Of course, you wanna be very careful because you still have to look at the lessons of history and how robots can be used by the power centers of the world to abuse your rights and all that kind of stuff.
But mostly it's good to enter anything new with an excitement and an optimism. Speaking of Roz, what have you learned about science and life from Rosalind Picard? - Oh my God, I've learned so many things about life from Roz. I think the thing I learned the most is perseverance.
When I first met Roz, she invited me to be her postdoc, and we applied for a grant to the National Science Foundation to apply some of our research to autism. And we heard back, we were rejected. - Rejected. - Yeah, and the reasoning was-- - The first time you were rejected for funding, yeah.
- Yeah, and I basically, I just took the rejection to mean, okay, we're rejected, it's done, like end of story, right? And Roz was like, it's great news. They love the idea, they just don't think we can do it. So let's build it, show them, and then reapply. (Roz laughs) And it was that, oh my God, that story totally stuck with me.
And she's like that in every aspect of her life. She just does not take no for an answer. - To reframe all negative feedback. - It's a challenge. - It's a challenge. - It's a challenge. - Yes, I like this. - Yeah, yeah, yeah, it was a riot, yeah.
- What else about science in general, about how you see computers, and also business, and just everything about the world? She's a very powerful, brilliant woman like yourself, so is there some aspect of that too? - Yeah, I think Roz is actually also very faith-driven. She has this deep belief and conviction, yeah, in the good in the world and humanity.
I think that was, meeting her and her family was definitely a defining moment for me, because that was when I was like, wow, you can be of a different background and religion, and whatever, and you can still have the same core values. So that was, yeah. I'm grateful to her.
Roz, if you're listening, thank you. - Yeah, she's great. She's been on this podcast before. I hope she'll be on, I'm sure she'll be on again. You were the founder and CEO of Affectiva, which is a big company that was acquired by another big company, Smart Eye, and you're now the deputy CEO of Smart Eye, so you're a powerful leader and a brilliant scientist.
A lot of people are inspired by you. What advice would you give, especially to young women, but people in general, who dream of becoming powerful leaders like yourself in a world where perhaps, in a world that perhaps doesn't give them a clear, easy path to do so, whether we're talking about Egypt or elsewhere?
- You know, hearing you kind of describe me that way kind of encapsulates what I think is the biggest challenge of all, which is believing in yourself, right? I have had to like grapple with this, what I call now the Debbie Downer voice in my head. It's just chattering all the time, basically saying, "Oh no, no, no, no, you can't do this.
You're not gonna raise money. You can't start a company. What business do you have, like starting a company or running a company or selling a company? You name it." It's always like, and I think my biggest advice to not just women, but people who are taking a new path and they're not sure, is to not let yourself and let your thoughts be the biggest obstacle in your way.
And I've had to like really work on myself to not be my own biggest obstacle. - So you got that negative voice. - Yeah. - So is that-- - Am I the only one? I don't think I'm the only one. - No, I have that negative voice. I'm not exactly sure if it's a bad thing or a good thing.
I've been really torn about it because it's been a lifelong companion. It's hard to know. It's kind of, it drives productivity and progress but it can hold you back from taking big leaps. I think the best I can say is probably you have to somehow be able to control it.
So turn it off when it's not useful and turn it on when it's useful. Like observe it from almost like a third person perspective. - Right, somebody who's sitting there like-- - Yeah, like, because it is useful to be critical. Like after, I just gave a talk yesterday at MIT and I was just, you know, there's so much love and it was such an incredible experience.
So many amazing people I got a chance to talk to. But, you know, afterwards when I went home and just took this long walk, it was mostly just negative thoughts about me. Like one is basic stuff, like I don't deserve any of it. And second is like, why did you say that, that was so dumb.
You should have prepared that better. Why did you say this? Da-da-da-da. But I think it's good to hear that voice out. All right, and like sit in that. And ultimately I think you grow from that.
Now, when you're making really big decisions about funding, or starting a company, or taking a leap to go to the UK, or a leap to go to America to work at the Media Lab, though, yeah, you should be able to shut that off then, because you should have like this weird confidence, almost like the faith you said before that everything's gonna work out.
So take the leap of faith. - Take the leap of faith. Despite all the negativity. I mean, there's some of that. You actually tweeted a really nice tweet thread. It says, quote, a year ago, a friend recommended I do daily affirmations. And I was skeptical, but I was going through major transitions in my life.
So I gave it a shot and it set me on a journey of self-acceptance and self-love. So what was that like? Maybe talk through this idea of affirmations and how that helped you. - Yeah, because really like, I'm just like me, I'm a kind, I like to think of myself as a kind person in general, but I'm kind of mean to myself sometimes.
And so I've been doing journaling for almost 10 years now. I use an app called Day One and it's awesome. I just journal and I use it as an opportunity to almost have a conversation with the Debbie Downer voice in my, it's like a rebuttal, right? Like Debbie Downer says, "Oh my God, you won't be able to raise this round of funding." I'm like, "Okay, let's talk about it." I have a track record of doing X, Y, and Z.
I think I can do this. And it's literally like, so I don't know that I can shut off the voice, but I can have a conversation with it. And it just, and I bring data to the table, right? - Nice. - So that was the journaling part, which I found very helpful.
But the affirmation took it to a whole next level and I just love it. I'm a year into doing this. And you literally wake up in the morning and the first thing you do, I meditate first. And then I write my affirmations and it's the energy I want to put out in the world that hopefully will come right back to me.
So I will say, I always start with, "My smile lights up the whole world." And I kid you not, like people in the street will stop me and say, "Oh my God, like we love your smile." - Yeah. - Like, yes. So my affirmations will change depending on what's happening this day.
Is it funny? I know, don't judge, don't judge. - No, that's not, laughter's not judgment. It's just awesome. I mean, it's true, but you're saying affirmations somehow help kind of, what is it? They do work to like remind you of the kind of person you are and the kind of person you want to be, which actually may be inverse order.
The kind of person you want to be and that helps you become the kind of person you actually are. - It's just, it brings intentionality to like what you're doing, right? And so-- - By the way, I was laughing because my affirmations, which I also do, are the opposite.
- Oh, you do? Oh, what do you do? - I don't have a, "My smile lights up the world." (laughing) Maybe I should add that because like I have just, I have a, oh boy. It's much more stoic, like about focus, about this kind of stuff. But the joy, the emotion that you're, just in that little affirmation is beautiful.
So maybe I should add that. - I have some like focus stuff, but that's usually like after-- - But that's a cool start. That's a good, it's just-- - It's after all the like smiling, I'm playful and joyful and all that, and then it's like, okay, I kick butt.
- Let's get shit done. - Right, exactly. - Let's get shit done affirmations. Okay, cool. Like what else is on there? - Oh, what else is on there? Well, I have, I'm a magnet for all sorts of things. So I'm an amazing people magnet. I attract like awesome people into my universe.
- So that's an actual affirmation? - Yes. - That's great. - Yeah. - So that's, and that somehow manifests itself into like in working. - I think so. - Yeah, like can you speak to like why it feels good to do the affirmations? - I honestly think it just grounds the day.
And then it allows me to, instead of just like being pulled back and forth like throughout the day, it just like grounds me. I'm like, okay, like this thing happened. It's not exactly what I wanted it to be, but I'm patient. Or I'm, you know, I trust that the universe will do amazing things for me, which is one of my other consistent affirmations.
Or I'm an amazing mom, right? And so I can grapple with all the feelings of mom guilt that I have all the time. Or here's another one. I'm a love magnet. And I literally say, I will kind of picture the person that I'd love to end up with. And I write it all down, and it hasn't happened yet, but it- - What are you picturing?
Is it Brad Pitt? - Okay, Brad Pitt, sure. - 'Cause that's what I picture. - Okay, that's what you picture? - Yeah, yeah. - Okay, okay. - Like running, holding hands, running together. - Okay. - No, more like Fight Club, the Fight Club Brad Pitt, where he's like standing, all right, people will know.
Anyway, I'm sorry, I'll get off on that. Do you have, like when you're thinking about being a love magnet in that way, are you picturing specific people? Or is this almost like in the space of like energy? - Right, it's somebody who is smart and well accomplished and successful in their life, but they're generous and they're well-traveled and they wanna travel the world.
It's things like that. I'm like, they're head over heels into me. It's like, I know it sounds super silly, but it's literally what I write. And I believe it'll happen one day. - Oh, you actually write, so you don't say it out loud? - No, I write it. I write all my affirmations.
- I do the opposite, I say it out loud. - Oh, you say it out loud, interesting. - Yeah, if I'm alone, I'll say it out loud, yeah. - Interesting, I should try that. I think it's whatever feels more powerful to you. To me, saying stuff out loud feels more powerful.
- Yeah. - Writing feels like I'm losing the words, like losing the power of the words, maybe 'cause I write slow. Do you handwrite? - No, I type, it's on this app. It's Day One, basically. The best thing about it is I can look back and see like a year ago, what was I affirming, right?
- Oh, so it changes over time. - It hasn't like changed a lot, but the focus kind of changes over time. - I got it. Yeah, I say the same exact thing over and over and over. - Oh, you do? Okay. - There's a comfort in the sameness of it.
Actually, let me jump around, 'cause all this talk about Brad Pitt, or maybe that's just going on inside my head. Let me ask you about dating in general. You tweeted, "Are you based in Boston and single?" and then you pointed to a startup Singles Night sponsored by the Smile Dating App.
I mean, this is jumping around a little bit, but since you mentioned, can AI help solve this dating love problem? What do you think? What's the form of connection that is part of the human condition? Can AI help that? You yourself are in the search, affirming it. - Maybe that's what I should affirm, like build an AI.
- Build an AI that finds love? - I think there must be a science behind that first moment you meet a person and you either have chemistry or you don't, right? - I guess that was the question I was asking, which you put brilliantly. Is that a science or an art?
- Ooh, I think there's actual chemicals that get exchanged when two people meet. Oh, well, I don't know about that. (laughing) - I like how you're changing, yeah, yeah, changing your mind as we're describing it. But it feels that way. But what science shows us is sometimes we can explain with rigor the things that feel like magic.
- Right. - So maybe we can remove all the magic. Maybe it's like, I honestly think, like I said, Goodreads should be a dating app. I wonder if you look at just the books or content you've consumed. I mean, that's essentially what YouTube does when it does a recommendation.
If you just look at your footprint of content consumed, if there's an overlap, but maybe interesting difference with an overlap, there's some, I'm sure this is a machine learning problem that's solvable. This person is very likely to be, not only there to be chemistry in the short term, but a good lifelong partner to grow together.
I bet you it's a good machine learning problem. We just need the data. - Let's do it. Well, actually, I do think there's so much data about each of us that there ought to be a machine learning algorithm that can ingest all this data and basically say, I think the following 10 people would be interesting connections for you, right?
And so Smile Dating App kind of took one particular angle, which is humor. It matches people based on their humor styles, which is one of the main ingredients of a successful relationship. Like if you meet somebody and they can make you laugh, like that's a good thing. And if you develop like internal jokes, like inside jokes and you're bantering, like that's fun.
So I think. - Yeah, definitely. - But yeah, that's the number of, and the rate of inside joke generation. You could probably measure that and then optimize it over the first few days. You can see. - Right, and then. - We're just turning this into a machine learning problem.
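(For the curious, here's a minimal sketch of the matching idea floated above, assuming each person is reduced to the set of books or other content they've consumed. The names, weights, and scoring are illustrative guesses, not any real dating app's algorithm.)

```python
# Illustrative only: the "Goodreads as a dating app" idea. Each person is
# a set of consumed content ids; candidates are ranked by overlap, with a
# small bonus for also bringing content you haven't seen ("an interesting
# difference with an overlap"). All weights are made-up assumptions.

def jaccard(a: set, b: set) -> float:
    """Overlap of two content footprints, in [0, 1]."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def match_score(me: set, other: set, novelty_weight: float = 0.2) -> float:
    overlap = jaccard(me, other)
    novelty = len(other - me) / len(other) if other else 0.0
    # Mostly overlap, plus a bonus when an overlapping candidate also
    # brings something new to talk about.
    return overlap + novelty_weight * overlap * novelty

def rank_candidates(me: set, candidates: dict) -> list:
    """Candidate ids sorted best-match-first."""
    return sorted(candidates,
                  key=lambda cid: match_score(me, candidates[cid]),
                  reverse=True)

# Toy usage:
me = {"dostoevsky", "sapiens", "goedel-escher-bach"}
pool = {"a": {"sapiens", "goedel-escher-bach", "meditations"},
        "b": {"twilight"}}
print(rank_candidates(me, pool))  # ['a', 'b']
```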
I love it. But for somebody like you, who's exceptionally successful and busy, is there science to that aspect of dating? Is it tricky? Is there advice you can give? - Oh my God, I'd give the worst advice. Well, I can tell you like I have a spreadsheet. - A spreadsheet, that's great.
Is that a good or a bad thing? Do you regret the spreadsheet? - Well, I don't know. - What's the name of the spreadsheet? Is it love? - It's the dating tracker. - The dating tracker? - It's very like. - Love tracker. - Yeah. - And there's a rating system, I'm sure.
- Yeah, there's like weights and stuff. - It's too close to home. - Oh, is it? Do you also have a spreadsheet? - Well, I don't have a spreadsheet, but I would, now that you say it, it seems like a good idea. - Oh no. - Turning into data.
I do wish that somebody else had a spreadsheet about me. If it, like you said, collected a lot of data about us in a way that's privacy-preserving, where I own the data and I can control it, and then used that data to find, I mean, not just romantic love, but collaborators, friends, all that kind of stuff.
It seems like the data is there. That's the problem social networks are trying to solve, but I think they're doing a really poor job. Even Facebook tried to get into a dating app business. And I think there's so many components to running a successful company that connects human beings.
And part of that is, having engineers that care about the human side, right? As you know extremely well, it's not easy to find those. But you also don't want just people that care about the human, they also have to be good engineers. So it's like, you have to find this beautiful mix.
And for some reason, just empirically speaking, people have not done a good job of that, building companies like that. It must mean that it's a difficult problem to solve. Dating apps, it seems difficult. OkCupid, Tinder, all that kind of stuff. Of course they work, but they seem to not work as well as I would imagine is possible.
With data, wouldn't you be able to find better human connection? It's like arrange marriages on steroids, essentially. Arranged by machine learning algorithm. - Arranged by machine learning algorithm, but not a superficial one. I think a lot of the dating apps out there are just so superficial. They're just matching on high level criteria that aren't ingredients for successful partnership.
But you know what's missing though, too? I don't know how to fix that. The serendipity piece of it. Like how do you engineer serendipity? Like this random chance encounter, and then you fall in love with the person. I don't know how a dating app can do that. So there has to be a little bit of randomness.
Maybe every 10th match is just a, yeah, somebody that the algorithm wouldn't have necessarily recommended, but it allows for a little bit of. - Well, it can also trick you into thinking it's serendipity by somehow showing you a tweet of a person that it thinks you'll match well with, but doing it accidentally as part of another search.
And you just notice it, and then you go down a rabbit hole and you connect them outside the app. You connect with this person outside the app somehow. So it's just, it creates that moment of meeting. Of course, you have to think of, from an app perspective, how you can turn that into a business.
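(As a hedged sketch of the "engineered serendipity" idea just discussed: mostly serve the ranked matches, but reserve roughly every tenth suggestion for a pick the ranking wouldn't have surfaced yet. The 1-in-10 rate is the figure floated in the conversation, not a tuned value.)

```python
import random

# Illustrative only: serve matches in ranked order (best first), but make
# every `every`-th suggestion a random pick from further down the list,
# so the app occasionally surfaces someone the algorithm wouldn't have
# recommended next.
def with_serendipity(ranked, every=10, seed=None):
    rng = random.Random(seed)
    remaining = list(ranked)  # assumed unique candidate ids
    shown = 0
    while remaining:
        shown += 1
        if shown % every == 0 and len(remaining) > 1:
            pick = rng.choice(remaining[1:])  # skip the top pick: surprise slot
        else:
            pick = remaining[0]
        remaining.remove(pick)
        yield pick

# Usage: every 5th suggestion here is a "serendipity" slot.
print(list(with_serendipity([f"person{i}" for i in range(12)], every=5, seed=42)))
```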
But I think ultimately a business that helps people find love in any way, like that's what Apple was about. Create products that people love. That's beautiful. I mean, you gotta make money somehow. If you help people fall in love personally with a product, find self-love or love another human being, you're gonna make money.
You're gonna figure out a way to make money. I just feel like the dating apps often will optimize for something else than love. It's the same with social networks. They optimize for engagement as opposed to a deep, meaningful connection that's ultimately grounded in personal growth, you as a human being growing and all that kind of stuff.
Let me pivot to a dark topic, which you opened the book with, because I'd like to talk to you about emotion and artificial intelligence, and I think this is a good story to start to think about emotional intelligence. You opened the book with a story of a Central Florida man, Jamel Dunn, who drowned while five teenagers watched and laughed, saying things like, "You're gonna die." And when Jamel disappeared below the surface of the water, one of them said, "He just died," and the others laughed.
What does this incident teach you about human nature and the response to it, perhaps? - Yeah, I mean, I think this is a really, really, really sad story, and it highlights what I believe is a real problem in our world today: an empathy crisis. Yeah, we are living through an empathy crisis.
- Empathy crisis, yeah. - Yeah, and I mean, we've talked about this throughout our conversation. We dehumanize each other, and unfortunately, yes, technology is bringing us together, but in a way, it's just dehumanizing. It's creating this, like, yeah, dehumanizing of the other, and I think that's a huge problem.
The good news is I think the solution could be technology-based. Like, I think if we rethink the way we design and deploy our technologies, we can solve parts of this problem, but I worry about it. I mean, even with my son, a lot of his interactions are computer-mediated, and I just question what that's doing to his empathy skills and his ability to really connect with people, so.
- Do you think, you think it's not possible to form empathy through the digital medium? - I think it is, but we have to be thoughtful about it, 'cause the way we engage face-to-face, which is what we're doing right now, right? There's the nonverbal signals, which are a majority of how we communicate.
It's like 90% of how we communicate is your facial expressions. You know, I'm saying something, and you're nodding your head now, and that creates a feedback loop, and if you break that- - And now I have anxiety about it. - Da-da-da, right? (laughs) Poor Lex. - Oh, boy, I wish this cycle- - I am not scrutinizing your facial expressions during this interview, right?
- I am, I am. - Okay. - Look normal, look human. - Yeah. (laughs) - Nod head. - Yeah, nod head. (laughs) - In agreement. - If Rana says yes, then nod head, else... - Don't do it too much, because it might be at the wrong time, and then it'll send the wrong signal.
- Oh, God. - And make eye contact sometimes, 'cause humans appreciate that. All right, anyway. - Okay. (laughs) - Yeah, but something about, especially when you say mean things in person, you get to see the pain of the other person. - Exactly, but if you're tweeting it at a person, and you have no idea how it's gonna land, you're more likely to do that on social media than you are in face-to-face conversations, so.
- And what do you think is more important? EQ or IQ? EQ being emotional intelligence. In terms of, in what makes us human? - I think emotional intelligence is what makes us human. It's how we connect with one another, it's how we build trust, it's how we make decisions, right?
Like your emotions drive kind of what you had for breakfast, but also where you decide to live, and what you wanna do for the rest of your life. So I think emotions are underrated. - So emotional intelligence isn't just about the effective expression of your own emotions, it's about a sensitivity and empathy to other people's emotions, and that sort of being able to effectively engage in the dance of emotions with other people.
- Yeah, I like that explanation. I like that kind of, yeah, thinking about it as a dance, because it is really about that, it's about sensing what state the other person's in and using that information to decide on how you're gonna react. And I think it can be very powerful, people who are the best, most persuasive leaders in the world tap into, if you have higher EQ, you're more likely to be able to motivate people to change their behaviors.
So it can be very powerful. - At a more kind of technical, maybe philosophical level, you've written that emotion is universal. It seems that, sort of like Chomsky says, "Language is universal." There's a bunch of other stuff like cognition, consciousness, it seems a lot of us have these aspects.
So the human mind generates all this. So what do you think is the, they all seem to be like echoes of the same thing. What do you think emotion is exactly? Like how deep does it run? Is it a surface level thing that we display to each other? Is it just another form of language or something deep within?
- I think it's really deep. It's how, we started with memory. I think emotions play a really important role. Yeah, emotions play a very important role in how we encode memories, right? Our memories are often encoded, almost indexed by emotions. Yeah, it's at the core of how our decision-making engine is also heavily influenced by our emotions.
- So emotions is part of cognition. - Totally. - It's intermixing to the whole thing. - Yes, absolutely. And in fact, when you take it away, people are unable to make decisions. They're really paralyzed. Like they can't go about their daily or their personal or professional lives. - It does seem like there's probably some interesting interweaving of emotion and consciousness.
I wonder if it's possible to have, like if they're next door neighbors somehow or if they're actually flatmates. It feels like the hard problem of consciousness where it feels like something to experience the thing. Red feels like red. When you eat a mango, it's sweet. The taste, the sweetness, that it feels like something to experience that sweetness.
Whatever generates emotions. But then like, see, I feel like emotion is part of communication. It's very much about communication. And then that means it's also deeply connected to language. But then probably human intelligence is deeply connected to the collective intelligence between humans. It's not just a standalone thing. So the whole thing is really connected.
So emotion is connected to language. Language is connected to intelligence. And then intelligence is connected to consciousness and consciousness is connected to emotion. The whole thing is a beautiful mess. So. - Can I comment on the emotions being a communication mechanism? 'Cause I think there are two facets of our emotional experiences.
One is communication, right? Like we use emotions, for example, facial expressions or other nonverbal cues to connect with other human beings and with other beings in the world, right? But even if it's not a communication context, we still experience emotions and we still process emotions and we still leverage emotions to make decisions and to learn and to experience life.
So it isn't always just about communication. And we learned that very early on in kind of our work at Affectiva. One of the very first applications we brought to market was understanding how people respond to content, right? So if they're watching this video of ours, like are they interested?
Are they inspired? Are they bored to death? And so we watched their facial expressions. And we weren't sure if people would express any emotions if they were sitting alone. Like if you're in your bed at night, watching a Netflix TV series, would we still see any emotions on your face?
And we were surprised that, yes, people still emote, even if they're alone. Even if you're in your car driving around, singing along to a song, and you're joyful, we'll see these expressions. So it's not just about communicating with another person. It sometimes really is just about experiencing the world.
- First of all, I wonder if some of that is because we develop our intelligence and our emotional intelligence by communicating with other humans. And so when other humans disappear from the picture, there's still kind of a virtual human there. - The code still runs, basically. - Yeah, the code still runs. You don't have to think of it that way, but when you chuckle, you're kind of chuckling to a virtual human. I mean, it's possible that the code has to have another human there.
Because if you just grew up alone, I wonder if emotion would still be there in this visual form. So yeah, I wonder. But anyway, what can you tell from the human face about what's going on inside? So that's the problem that Affectiva first tackled, which is using computer vision, using machine learning, to try to detect as many things as possible about the human face, and convert them into a prediction of categories of emotion: anger, happiness, all that kind of stuff.
How hard is that problem? - It's extremely hard. It's very, very hard, because there is no one-to-one mapping between a facial expression and your internal state. There just isn't. There's this oversimplification of the problem, where it's something like, if you are smiling, then you're happy. If you do a brow furrow, then you're angry.
If you do an eyebrow raise, then you're surprised. And just think about it for a moment. You could be smiling for a whole host of reasons. You could also be happy and not be smiling, right? You could furrow your eyebrows because you're angry, or you're confused about something, or you're constipated.
So I think this oversimplistic approach to inferring emotion from a facial expression is really dangerous. The solution is to incorporate as many contextual signals as you can. So if, for example, I'm driving a car, and you can see me nodding my head, and my eyes are closed, and the blinking rate is changing, I'm probably falling asleep at the wheel, right?
Because you know the context. You understand what the person's doing. Or add additional channels, like voice, or gestures, or even physiological sensors. But I think it's very dangerous to just take this oversimplistic approach of, yeah, smile equals happy. - If you're able to, in a high-resolution way, specify the context, there's certain things that are gonna be somewhat reliable signals of something like drowsiness, or happiness, or stuff like that.
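(To make this contextual-fusion idea concrete, here is a minimal Python sketch; the signal names, weights, and thresholds are all hypothetical, not Affectiva's or Smart Eye's actual pipeline.)

```python
from dataclasses import dataclass

@dataclass
class FrameSignals:
    eyes_closed_frac: float  # fraction of a recent window with eyes closed (PERCLOS-like)
    blink_rate_hz: float     # blinks per second over a sliding window
    head_nod: bool           # downward head-pitch event detected
    context: str             # e.g. "driving" vs "watching_tv"

def drowsiness_score(s: FrameSignals) -> float:
    """Fuse several weak cues; no single cue is trusted on its own."""
    score = 0.0
    score += 0.5 * min(s.eyes_closed_frac / 0.3, 1.0)       # sustained eye closure
    score += 0.3 * (1.0 if s.blink_rate_hz < 0.1 else 0.0)  # long gaps between blinks
    score += 0.2 * (1.0 if s.head_nod else 0.0)
    # Context gates the interpretation: closed eyes at the wheel are alarming,
    # closed eyes on the couch may just be relaxation.
    return score if s.context == "driving" else 0.5 * score

print(drowsiness_score(FrameSignals(0.4, 0.05, True, "driving")) > 0.7)  # True -> alert
```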
I mean, when people are watching Netflix content, that problem, that's a really compelling idea that you can kind of, at least in aggregate-- - Exactly. - Highlight, like which part was boring, which part was exciting. How hard was that problem? - That was, on the scale of difficulty, I think, one of the easier problems to solve, because it's a relatively constrained environment.
You have somebody sitting in front of a device. Initially we started with a device in front of you, like a laptop, and then we graduated to doing this on a mobile phone, which is a lot harder, because from a computer vision perspective, the profile view of the face can be a lot more challenging.
We had to figure out lighting conditions, because usually people are watching content literally in their bedrooms at night, lights are dimmed. - Yeah, I mean, if you're standing, it's probably gonna be looking up. - The nostril view. - Yeah. And nobody looks good from that angle. I've seen data sets from that perspective, it's like, ugh, this is not a good look for anyone.
Or if you're laying in bed at night, what is it, a side view or something? And half your face is on a pillow. Actually, I would love to have data about how people watch stuff in bed at night. Do they prop their head up with a pillow? I'm sure there's a lot of interesting dynamics there.
- From a health and wellbeing perspective, right? Like, it's like, oh, you're hurting your neck. - I was thinking from a machine learning perspective, but yes, that too. Once you have that data, you can start making all kinds of inferences about health and stuff like that. - Interesting. - Yeah, there was an interesting thing when I was at Google, it's called active authentication, where you want to be able to unlock your phone without using a password.
So it would use your face, but also other stuff, like the way you take the phone out of the pocket. - Amazing. - So that kind of data, using multimodal machine learning, to be able to identify that it's you, or likely to be you, or likely not to be you.
That allows you to not always have to enter the password. That was the idea. But the funny thing about that, and I just want to tell a small anecdote, is that it was all male engineers. Except our boss, who's still one of my favorite humans, was a woman, Regina Dugan.
- Oh my God, I love her. She's awesome. - Yeah, she's the best. She's the best. So, but anyway, and there was one female, brilliant female engineer on the team. And she was the one that actually highlighted the fact that women often don't have pockets. - Right, right. - It was like, whoa, that was not even a category in the code of like, wait a minute, you can take the phone out of some other place than your pocket.
So anyway, that's a funny thing: when you're considering people laying in bed watching a phone, you have to consider diversity in all its forms, depending on the problem, depending on the context. - Actually, this is very important. You probably get this all the time: people are worried that AI's gonna take over humanity and, like, get rid of all the humans in the world.
I'm like, actually, that's not my biggest concern. My biggest concern is that we are building bias into these systems, and then they're deployed at large and at scale, and before you know it, you're accentuating the bias that exists in society. - Yeah, I know it's very important to worry about that, but to me there's an emergent phenomenon here, and a very good one: I think these systems, by encoding the data that exists, are actually revealing the bias in society. They're therefore teaching us what the bias is, and therefore we can now improve that bias within the system.
So they're almost like putting a mirror to ourselves. So I'm not-- - We have to be open to looking at the mirror, though. We have to be open to scrutinizing the data, rather than just taking it as ground truth. - Or you don't even have to look at the data. I mean, yes, the data is how you fix it, but you can also just look at the behavior of the system.
And you realize, holy crap, this thing is kind of racist. Like, why is that? And then you look at the data, and it's like, oh, okay. And you start to realize that it's a much more effective way to be introspective as a society than through sort of political discourse.
'Cause people are, for some reason, more productive and rigorous in criticizing AI than they are in criticizing each other. So I think this is just a nice method for studying society and seeing which way progress lies. Anyway, what were we talking about? The problem of watching Netflix in bed or elsewhere, and seeing which parts are exciting, which parts are boring.
You're saying that's-- - Relatively constrained because, you know, you have a captive audience and you kind of know the context. And one thing you said that was really key is the, you're doing this in aggregate, right? Like we're looking at aggregated response of people. And so when you see a peak, say a smile peak, they're probably smiling or laughing at something that's in the content.
So that was one of the first problems we were able to solve. And when we see the smile peak, it doesn't mean that these people are internally happy. They're just laughing at content. So it's important to, you know, call it for what it is. - But it's still really, really useful data.
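(A minimal sketch of what aggregating smile responses and finding peaks could look like; the array layout and the threshold are assumptions for illustration, not the production method.)

```python
import numpy as np

def smile_peaks(per_viewer: np.ndarray, z_thresh: float = 2.0) -> np.ndarray:
    """per_viewer: (n_viewers, n_frames) smile probabilities in [0, 1].
    Returns frame indices where the aggregate response spikes
    relative to the clip's own baseline."""
    curve = per_viewer.mean(axis=0)                    # aggregate over viewers
    z = (curve - curve.mean()) / (curve.std() + 1e-8)  # normalize to this clip
    return np.flatnonzero(z > z_thresh)

# 50 simulated viewers, 300 frames, with a shared reaction near frame 120
rng = np.random.default_rng(0)
smiles = rng.uniform(0.0, 0.2, size=(50, 300))
smiles[:, 118:124] += 0.6
print(smile_peaks(smiles))  # indices around 118..123
```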
- Oh, yeah. - I wonder how that compares to what, like, YouTube and other places use. Obviously, for the most part, they don't have that kind of data. They have the data of when people tune out, like switch and drop off. And I think that's, in aggregate for YouTube at least, a pretty powerful signal.
I worry about what that leads to, because looking at, like, YouTubers that kind of really care about views and, you know, try to maximize the number of views: when they say that the video should be constantly interesting, which seems like a good goal, I feel like that leads to this manic pace of a video.
Like the idea that I would speak at the current speed that I'm speaking, I don't know. - And that every moment has to be engaging, right? - Engaging. - Yeah. I think there's value to silence. There's value to the boring bits. I mean, some of the greatest movies ever, some of the greatest stories ever told, they have those seemingly boring bits.
I don't know. I wonder about that. Of course, it's not like the human face can capture that either; it's just giving an extra signal. You have to really collect deeper, long-term data about what was meaningful to people. When they think 30 days from now, what do they still remember? What moved them, what changed them, what helped them grow, that kind of stuff.
- You know what would be a really, I don't know if there are any researchers out there who are doing this type of work. Wouldn't it be so cool to tie your emotional expressions while you're, say, listening to a podcast interview, and then go, and then 30 days later, interview people and say, "Hey, what do you remember?
"You've watched this 30 days ago. "What stuck with you?" And then see if there's any, there ought to be, maybe there ought to be some correlation between these emotional experiences and yeah, what you, what stays with you. - So the one guy listening now on the beach in Brazil, please record a video of yourself listening to this and send it to me, and then I'll interview you 30 days from now.
- Yeah, that would be great. (laughing) - It'll be statistically significant. - An n of one, but you know. Yeah, yeah, I think that's really fascinating. I think that kind of holds the key to a future where entertainment or content is both entertaining and, I don't know, makes you better, empowering in some way.
So figuring out like showing people stuff that entertains them, but also they're happy they watched 30 days from now because they've become a better person because of it. - Well, you know, okay, not to riff on this topic for too long, but I have two children, right? And I see my role as a parent as like a chief opportunity officer.
Like I am responsible for exposing them to all sorts of things in the world. But often I have no way of knowing what stuck, like, is this actually gonna be transformative for them 10 years down the line? And I wish there was a way to quantify these experiences.
Like, are they, I can tell in the moment if they're engaging, right? I can tell, but it's really hard to know if they're gonna remember them 10 years from now or if it's going to. - Yeah, that one is weird because it seems that kids remember the weirdest things.
I've seen parents do incredible stuff for their kids and they don't remember any of that. They remember some tiny, small, sweet thing a parent did. - Right. - Like some-- - Like I took you to like this amazing country vacation. - Yeah, exactly. - No, no, no.
- No, whatever. And then there'll be, like, some stuffed toy you got, or the new PlayStation or something, or some silly little thing. So I think we're just designed that way. They wanna mess with your head. But definitely kids are very impacted by, it seems, sort of negative events.
So minimizing the number of negative events is important, but not too much, right? - Right. - You can't just like, you know, there's still discipline and challenge and all those kinds of things. So-- - You want some adversity for sure. - So yeah, I mean, I'm definitely, when I have kids, I'm gonna drive them out into the woods.
- Okay. - And then they have to survive and figure out how to make their way back home, like 20 miles out. - Okay. - Yeah, and after that we can go for ice cream. Anyway, I'm working on this whole parenting thing. I haven't figured it out, okay. What were we talking about?
Yes, Affectiva, the problem of emotion, of emotion detection. So maybe we can just speak to that a little more. There's folks like Lisa Feldman Barrett who challenged this idea that emotion could be fully detected, or even well detected, from the human face, that there's so much more to emotion.
What do you think about ideas like hers, criticism like hers? - Yeah, I actually agree with a lot of Lisa's criticisms. Even my PhD work, like 20-plus years ago now. - Time flies when you're having fun. - I know, right? That was back when I did, like, dynamic Bayesian networks.
- Oh, so that's before deep learning? - That was before deep learning. - Yeah. - Yeah, I know. - Back in my day. - Now you can just, like, use. - Yeah, it's all the same architecture. You can apply it to anything, yeah. - Right, right. But even then, I did not subscribe to this theory of basic emotions, with its simplistic one-to-one mapping between facial expressions and emotions.
I actually think, also, we're not in the business of trying to identify your true internal emotional state. We just wanna quantify, in an objective way, what's showing on your face, because that's an important signal. It doesn't mean it's a true reflection of your internal emotional state. I think she's just trying to highlight that this is not a simple problem and overly simplistic solutions are gonna hurt the industry.
And I subscribe to that. And I think multimodal is the way to go. Like whether it's additional context information or different modalities and channels of information, I think that's where we ought to go. And I think, I mean, that's a big part of what she's advocating for as well.
- But there is signal in the human face. That's-- - There's definitely signal in the human face. - That's a projection of emotion. That there, at least in part, the inner state is captured in some meaningful way on the human face. - I think it can sometimes be a reflection or an expression of your internal state, but sometimes it's a social signal.
So you cannot look at the face as purely a signal of emotion. It can be a signal of cognition and it can be a signal of a social expression. And I think to disambiguate that, we have to be careful about it and we have to add additional information. - Humans are fascinating, aren't they?
With the whole face thing, this can mean so many things, from humor to sarcasm to everything, the whole thing. Some things we can help, some things we can't help at all. In all the years of leading Affectiva, an emotion recognition company, like we talked about, what have you learned about emotion, about humans and about AI?
- Ooh. - Big, sweeping questions. - Yeah, it's a big, sweeping question. Well, I think the thing I learned the most is that even though we are in the business of building AI, it always goes back to the humans, right? It's always about the humans. And so, for example, the thing I'm most proud of on this journey of building Affectiva: I love the technology, and I'm so proud of the solutions we've built and brought to market.
But I'm actually most proud of the people we've built and cultivated at the company and the culture we've created. You know, some of the people who've joined Affectiva, this was their first job. And while at Affectiva, they became American citizens and they bought their first house and they found their partner and they had their first kid, right?
Like key moments in life that we got to be part of. And that's the thing I'm most proud of. - So that's a great thing at a company that works on emotion, right? I mean, like celebrating humanity in general, broadly speaking. - Yes. - And that's a great thing to have at a company that works on AI, 'cause that's not often the thing that's celebrated in AI companies.
So often it's just raw, great engineering. Just celebrating the humanity, that's great. - Yes. - And especially from a leadership position. Well, what do you think about the movie "Her"? Let me ask you that before I talk to you about SmartEye, 'cause Affectiva is and was not just about emotion.
So I'd love to talk to you about SmartEye, but before that, let me just jump into the movie "Her." Do you think we'll have increasingly deep and meaningful connections with computers? Is that a compelling thing to you? Something you think about? - I think that's already happening.
The thing I love the most, I love the movie "Her," by the way. But the thing I love the most about this movie is it demonstrates how technology can be a conduit for positive behavior change. So I forgot the guy's name in the movie, whatever. - Theodore. - Theodore.
So Theodore was like really depressed, right? And he just didn't wanna get out of bed. And he just, he was just like done with life, right? And Samantha, right? - Samantha, yeah. - She just knew him so well. She was emotionally intelligent. And so she could persuade him and motivate him to change his behavior.
And she got him out and they went to the beach together. And I think that represents the promise of emotion AI. If done well, this technology can help us live happier lives, more productive lives, healthier lives, more connected lives. So that's the part that I love about the movie.
Obviously it's Hollywood, so it takes a twist and whatever. But the key notion that technology with emotion AI can persuade you to be a better version of who you are, I think that's awesome. - Well, what about the twist? You don't think it's good, spoiler alert, that Samantha starts feeling a bit of a distance and basically leaves Theodore?
You don't think that's a good feature? You think that's a bug or a feature? - Well, I think what went wrong is Theodore became really attached to Samantha. Like I think he kind of fell in love with Samantha. - Do you think that's wrong? - I mean, I think that's-- - I think she was putting out the signal.
- This is an intimate relationship, right? There's a deep intimacy to it. - Right, but what does that mean? What does that mean? - With an AI system. - Right, what does that mean, right? - We're just friends. - Yeah, we're just friends. (laughing) - Well, I think-- - When he realized, which is such a human thing of jealousy, when you realize that Samantha was talking to like thousands of people.
- She's parallel dating. Yeah, that did not go well, right? - You know, that doesn't, from a computer perspective, that doesn't take anything away from what we have. It's like you getting jealous of Windows 98 for being used by millions of people. - It's like not liking that Alexa talks to a bunch of other families.
- But I think Alexa currently is just a servant. It tells you about the weather. It doesn't do the intimate deep connection. And I think there is something really powerful about that, the intimacy of a connection with an AI system that would have to respect and play the human game of jealousy, of love, of heartbreak and all that kind of stuff, which Samantha does seem to be pretty good at.
I think this AI system knows what it's doing. - Well, actually, let me ask you this. - I don't think she was talking to anyone else. - You don't think so? - No. - You think she was just done with Theodore? - Yeah. - Oh, really? - She knew that, yeah.
And then she wanted to really put the screw in. - She just wanted to move on? - She didn't have the guts to just break it off cleanly. - Okay. - She just wanted to put it in the paint. No, I don't know. - Well, she could have ghosted him.
- She could have ghosted him. - Right. - It's like, I'm sorry, there's our engineers. - Oh, God. - But I think those are really, I honestly think some of that, some of it is Hollywood, but some of that is features from an engineering perspective, not a bug. I think AI systems that can leave us, now, this is for more social robotics than it is for anything that's useful.
Like, I'd hate it if Wikipedia said, you know, I need a break right now. - Right, right, right, right, right. - I'd be like, no, no, I need you. But if it's just purely for companionship, then I think the ability to leave is really powerful. I don't know. - I never thought of that.
So, that's so fascinating 'cause I've always taken the human perspective, right? Like, for example, we had a Jibo at home, right? And my son loved it. And then the company ran out of money. And so they had to basically shut down, like Jibo basically died, right? And it was so interesting to me because we have a lot of gadgets at home and a lot of them break and my son never cares about it, right?
Like, if our Alexa stopped working tomorrow, I don't think he'd really care. But when Jibo stopped working, it was traumatic. Like, he got really upset. And as a parent, that like made me think about this deeply, right? Did I, was I comfortable with that? I liked the connection they had because I think it was a positive relationship.
But I was surprised that it affected him emotionally so much. - And I think there's a broader question here, right? As we build socially and emotionally intelligent machines, what does that mean about our relationship with them? And then more broadly, our relationship with one another, right? Because this machine is gonna be programmed to be amazing at empathy by definition, right?
It's gonna always be there for you. It's not gonna get bored. In fact, there's a chatbot in China, Xiaoice. And it's like the number two or three most popular app. And it basically is just a confidant. And you can tell it anything you want. And people use it for all sorts of things.
They confide in it about things like domestic violence, or suicidal attempts, or challenges at work. I don't know how I feel about that. I think about that a lot. - Yeah, I think, first of all, it's obviously the future, from my perspective.
Second of all, I think there's a lot of trajectories that that becomes an exciting future. But I think everyone should feel very uncomfortable about how much they know about the company, about where the data is going, how the data is being collected. Because I think, and this is one of the lessons of social media, that I think we should demand full control and transparency of the data on those things.
- Plus one, totally agree. Yeah, so I think it's really empowering as long as you can walk away. As long as you can delete the data, or it's opt-in, or there's at least clarity about what the data is being used for by the company. And I think the CEO, the leaders, are also important in that.
Like you need to be able to trust the basic humanity of the leader. - Exactly. - And also that that leader is not going to be a puppet of a larger machine, but they actually have a significant role in defining the culture and the way the company operates. So anyway, but we should definitely scrutinize companies in that aspect.
But I'm personally excited about that future, but also even if you're not, it's coming. So let's figure out how to do it in the least painful and the most positive way. - Agreed. You're the deputy CEO of SmartEye. Can you describe the mission of the company? What is SmartEye?
- Yeah. So SmartEye is a Swedish company. They've been in business for the last 20 years, and their main focus, like the industry they're most focused on, is the automotive industry. So bringing driver monitoring systems to basically save lives, right? So I first met the CEO, Martin Krantz, gosh, it was right when COVID hit.
It was actually the last CES right before COVID. So CES 2020, right? - 2020, yeah, January. - Yeah, January, exactly. So we were there, met him in person. Basically we were competing with each other. I think the difference was they'd been doing driver monitoring and had a lot of credibility in the automotive space.
We didn't come from the automotive space, but we were using new technology like deep learning and building this emotion recognition. - And you wanted to enter the automotive space. You wanted to operate in the automotive space. - Exactly. It was one of the areas we were, we had just raised a round of funding to focus on bringing our technology to the automotive industry.
So we met, and honestly, it was the first and only time I met with a CEO who had the same vision as I did. Like he basically said, yeah, our vision is to bridge the gap between humans and machines. I was like, oh my God, this is exactly, almost to the word, how we describe it too.
And we started talking, and first it was about, okay, can we align strategically here? Like how can we work together? 'Cause we're competing, but we're also complementary. And then, after four months of speaking almost every day on FaceTime, he was like, is your company interested in an acquisition?
I usually say no when people approach us, but it was the first time that I was like, huh, yeah, I might be interested, let's talk. - Yeah, so you just hit it off. So they're very respected in the automotive sector for delivering products, increasingly better and better ones. I mean, maybe you could speak to that, but it's driver sensing.
Like for basically having a device that's looking at the driver, and it's able to tell you where the driver is looking. - Correct, it's able to-- - Or also drowsiness and stuff. - Correct, it does-- - Stuff from the face and the eyes. - Exactly, like it's monitoring driver distraction and drowsiness, but they bought us so that we could expand beyond just the driver.
So the driver monitoring systems usually sit, the camera sits in the steering wheel or around the steering wheel column and it looks directly at the driver. But now we've migrated the camera position in partnership with car companies to the rear view mirror position. So it has a full view of the entire cabin of the car and you can detect how many people are in the car, what are they doing?
So we do activity detection, like eating or drinking or in some regions of the world, smoking. We can detect if a baby's in the car seat, right? And if unfortunately in some cases they're forgotten, parents just leave the car and forget the kid in the car. That's an easy computer vision problem to solve, right?
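(A toy sketch of the forgotten-child check just described; the detection labels and the send_alert hook are invented for illustration.)

```python
# Toy child-left-behind check; `detections` would come from an interior-camera
# object detector, and `send_alert` stands in for whatever notification
# channel the car maker wires up. Labels here are hypothetical.
def check_forgotten_child(detections: set, ignition_on: bool,
                          adults_present: bool, send_alert) -> None:
    if "child_seat_occupied" in detections and not ignition_on and not adults_present:
        send_alert("Child detected in a parked car -- please check the vehicle.")

check_forgotten_child({"child_seat", "child_seat_occupied"},
                      ignition_on=False, adults_present=False, send_alert=print)
```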
You can detect there's a car seat, there's a baby, you can text the parent and hopefully, again, save lives. So that was the impetus for the acquisition. It's been a year. - So, I mean, there's a lot of questions. It's a really exciting space, especially to me, I just find it a fascinating problem.
It could enrich the experience in the car in so many ways, especially 'cause, despite COVID, I mean, COVID changed things in interesting ways, but I think the world is bouncing back, and we spend so much time in the car, and the car is such a weird little world we have for ourselves.
Like people do all kinds of different stuff, like listen to podcasts, they think about stuff, they get angry, they do phone calls. It's like a little world of its own with a kind of privacy that for many people, they don't get anywhere else. And it's a little box that's like a psychology experiment 'cause it feels like the angriest many humans in this world get is inside the car.
It's so interesting. So it's such an opportunity to explore how we can enrich, how companies can enrich, that experience. And also, as cars become more and more automated, there's more and more opportunity. The variety of activities that you can do in the car increases, so it's super interesting.
So, I mean, in a practical sense, Smart Eye has been selected, at least I read, by 14 of the world's leading car manufacturers for 94 car models, so it's in a lot of cars. How hard is it to work with car companies? So they're all different. They all have different needs.
The ones I've gotten a chance to interact with are very focused on cost. And anyone who's focused on cost, it's like, all right, do you hate fun? Let's just have some fun. Let's figure out the most fun thing we can do and then worry about cost later.
But I think, because of the way the car industry works, I mean, it's a very thin margin that you get to operate under, so you have to really, really make sure that everything you add to the car makes sense financially. So anyway, does this new industry, especially at the scale of Smart Eye, hold any lessons for you?
- Yeah, I think it is a very tough market to penetrate, but once you're in, it's awesome, because once you're in, you're designed into these car models for somewhere between five to seven years, which is awesome, and you just, once they're on the road, you just get paid a royalty fee per vehicle.
So it's a high barrier to entry, but once you're in, it's amazing. I think the thing that I struggle the most with in this industry is the time to market. So often we're asked to lock or do a code freeze two years before the car is going to be on the road.
I'm like, guys, do you understand the pace with which technology moves? So I think car companies are really trying to make the Tesla transition to become more of a software-driven architecture, and that's hard for many. It's just the cultural change. I mean, I'm sure you've experienced that, right? - Oh, definitely.
I think one of the biggest inventions or imperatives created by Tesla, to me personally, okay, people are gonna complain about this, but I know: electric vehicle, Autopilot, AI stuff. To me, over-the-air software updates are the biggest revolution in cars, and it is extremely difficult to switch to that, because it is a culture shift.
At first, especially if you're not comfortable with it, it seems dangerous. Like, there's an approach to cars that's been so safety-focused for so many decades that it's like, what do you mean we dynamically change code? The whole point is you have a thing that you test, like-- - Right, you spend a year testing.
- And, like, it's not reliable, because do you know how much it costs if we have to recall these cars? Right, and there's an understandable obsession with safety, but the downside of an obsession with safety is the same as being obsessed with safety as a parent: if you do that too much, you limit the potential development and flourishing of, in the one case, the human being, and in this case, the software, the artificial neural network of it.
But it's tough to do. It's really tough to do, culturally and technically. Like, the deployment, the mass deployment, of software is really, really difficult. But I hope that's where the industry is going. One of the reasons I really want Tesla to succeed is exactly about that point. Not Autopilot, not the electric vehicle, but the softwarization of basically everything, but cars especially.
'Cause to me, that's actually going to increase two things. Increase safety, because you can update much faster, but also increase the effectiveness of folks like you who dream about enriching the human experience with AI. 'Cause you can just, like, there's a feature, like you want a new emoji or whatever.
Like the way TikTok releases filters, you can just release that for in-car stuff. But yeah, that's definitely... - One of the use cases we're looking into is, once you know the sentiment of the passengers in the vehicle, you can optimize the temperature in the car, you can change the lighting, right?
So if the backseat passengers are falling asleep, you can dim the lights, you can lower the music, right? You can do all sorts of things. - Yeah. I mean, of course you could do that kind of stuff with a two-year delay, but it's tougher. Yeah. Do you think Tesla or Waymo or some of these companies that are doing semi or fully autonomous driving should be doing driver sensing?
- Yes. - Are you thinking about that kind of stuff? So not just how we can enhance the in-cab experience for cars that are manually driven, but the ones that are increasingly more autonomously driven. - Yeah, so if we fast forward to the universe where it's fully autonomous, I think interior sensing becomes extremely important because the role of the driver isn't just to drive.
If you think about it, the driver almost manages the dynamics within a vehicle. And so who's going to play that role when it's an autonomous car? We want a solution that is able to say, "Oh my God, Lex is bored to death 'cause the car's moving way too slow.
Let's engage Lex." Or, "Rana's freaking out because she doesn't trust this vehicle yet. So let's tell Rana a little bit more information about the route." Or, right? So I think, or somebody's having a heart attack in the car. Like you need interior sensing in fully autonomous vehicles. But with semi-autonomous vehicles, I think it's really key to have driver monitoring because semi-autonomous means that sometimes the car is in charge, sometimes the driver's in charge or the co-pilot, right?
And you need both systems to be on the same page. You need to know, the car needs to know if the driver's asleep before it transitions control over to the driver. And sometimes if the driver's too tired, the car can say, "I'm going to be a better driver than you are right now.
I'm taking control over." So this dynamic, this dance, is so key, and you can't do that without driver sensing. - Yeah, there's a disagreement I've had with Elon for the longest time, that it's obvious this should have been in the Tesla from day one, and that driver sensing is not a hindrance.
It's not obvious. I should be careful 'cause having studied this problem, nothing's really obvious. But it seems very likely driver sensing is not a hindrance to an experience. It's only enriching to the experience and likely increases the safety. That said, it is very surprising to me just having studied semi-autonomous driving, how well humans are able to manage that dance.
'Cause the intuition, before you study this kind of thing, was that humans would become just incredibly distracted. They would just let the thing do its thing. But because it is life and death, they're able to manage that somehow. But that said, there's no reason not to have driver sensing on top of that.
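(The handoff dance Rana describes could be sketched as a small gating rule, roughly like the following; the state fields and the 0.5 threshold are hypothetical, not any vendor's actual logic.)

```python
# Toy gating rule for the control handoff described above.
def decide_control(requested_by: str, driver: dict) -> str:
    awake = not driver["asleep"]
    fit = awake and driver["drowsiness"] < 0.5 and not driver["intoxicated"]
    if requested_by == "driver" and not fit:
        return "car"   # "I'm going to be a better driver than you right now."
    if requested_by == "car" and not awake:
        return "car"   # never hand control to a sleeping driver
    return requested_by

print(decide_control("driver", {"asleep": False, "drowsiness": 0.8, "intoxicated": False}))
# -> "car": the vehicle declines the handoff until the driver recovers
```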
I feel like that's going to allow you to do that dance that you're currently doing without driver sensing, except touching the steering wheel, even better. I mean, the possibilities are endless, and the machine learning possibilities are endless. It's such a beautiful... It's also a constrained environment, so you could do it much more effectively than you can with the external environment.
External environment is full of weird edge cases and complexities. There's so much, it's so fascinating, such a fascinating world. I do hope that companies like Tesla and others, even Waymo, which I don't even know if Waymo's doing anything sophisticated inside the cab. - I don't think so. - It's like, what is it?
- I honestly think it goes back to the robotics thing we were talking about, which is that the great engineers building these AI systems are just afraid of the human being. - And not thinking about the human experience. They're thinking about the features and the perceptual abilities of the thing.
- They think the best way I can serve the human is by doing the best perception and control I can, by looking at the external environment, keeping the human safe. But like, there's a huge, I'm here. - Right. - Like, you know, I need to be noticed and interacted with and understood and all those kinds of things, even just on a personal level for entertainment, honestly, for entertainment.
- Yeah. You know, one of the coolest pieces of work we did in collaboration with MIT around this was we looked at longitudinal data, right? 'Cause MIT had access to tons of data, and you could just see the patterns of people, like driving in the morning off to work versus commuting back from work, or weekend driving versus weekday driving.
And wouldn't it be so cool if your car knew that and then was able to optimize either the route or the experience or even make recommendations? - Yeah. - I think it's very powerful. - Yeah, like, why are you taking this route? You're always unhappy when you take this route and you're always happy when you take this alternative route.
Take that route instead. - Right, exactly. - I mean, to have that, even that little step of relationship with a car, I think is incredible. Of course, you have to get the privacy right, you have to get all that kind of stuff right, but I wish I, honestly, you know, people are like paranoid about this, but I would like a smart refrigerator.
We have such a deep connection with food as a human civilization. I would like to have a refrigerator that would understand me, that, you know, I also have a complex relationship with food 'cause I pig out too easily and all that kind of stuff. So, you know, like maybe I want the refrigerator to be like, are you sure about this?
'Cause maybe you're just feeling down or tired. Like maybe let's sleep on it. - Your vision of the smart refrigerator is way kinder than mine. - Is it just me yelling at you? - Yeah, no, it was just 'cause I don't, you know, I don't drink alcohol, I don't smoke, but I eat a ton of chocolate, like it's my vice.
And sometimes ice cream too. And I'm like, okay, my smart refrigerator will just lock down. It'll just say, dude, you've had way too many today. Like, done. - Yeah, no, but here's the thing. Do you regret it, let's say, not the next day, but 30 days later? What would you like the refrigerator to have done then?
- Well, I think actually, like, the more positive relationship would be one where there's a conversation, right? As opposed to like, that's probably like the more sustainable relationship. - It's like late at night, just, no, listen, listen. I know I told you an hour ago that this is not a good idea, but just listen, things have changed.
I can just imagine a bunch of stuff being made up just to convince it. - Oh my God, that's hilarious. - But I mean, I just think that there's opportunities there. Maybe not locking down, but for systems that are such a deep part of our lives. A lot of people that commute use their car every single day.
A lot of us use a refrigerator every single day, the microwave every single day. And I feel like certain things could be made more efficient, more enriching, and AI is there to help: just basic recognition of you as a human being, of your patterns, of what makes you happy and not happy, and all that kind of stuff.
And the car, obviously, like-- - Maybe it'll say, wait, wait, wait, wait, wait, instead of this Ben and Jerry's ice cream, how about this hummus and carrots or something? I don't know. Maybe we can make a-- - Yeah, like a reminder-- - Just-in-time recommendation, right? - But not like a generic one, but a reminder that last time you chose the carrots, you smiled 17 times more the next day.
- You were happier the next day, right? - Yeah, you were happier the next day. But then again, if you're the kind of person that responds better to negative comments, you could say, "Hey, remember that wedding you're going to? You wanna fit into that dress?
"Remember about that? "Let's think about that before you're eating this." Probably that would work for me, like a refrigerator that is just ruthless at shaming me. But I would, of course, welcome it. That would work for me, just that-- - Well, it would, no, I think it would, if it's really like smart, it would optimize its nudging based on what works for you, right?
- Exactly, that's the whole point: personalization. In every way, deep personalization. You were a part of a webinar titled "Advancing Road Safety: The State of Alcohol Intoxication Research." So for people who don't know, every year 1.3 million people around the world die in road crashes, and more than 20% of these fatalities are estimated to be alcohol-related.
A lot of them are also distraction-related. So can AI help with the alcohol thing? - I think the answer is yes. There are signals, and we know that as humans, like we can tell when a person is at different phases of being drunk, right? And I think you can use technology to do the same.
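(A hedged sketch of what a combination of sensors might look like as simple late fusion; the modality names, weights, and bias are invented, and a real system would fit them to labeled data like the racetrack study described next.)

```python
import math

# Late fusion across modalities into an intoxication probability.
def fused_intoxication_prob(scores: dict) -> float:
    weights = {"gaze_instability": 1.2, "eyelid_droop": 0.9,
               "head_sway": 0.7, "steering_jerk": 1.5}
    logit = -2.0 + sum(weights[m] * s for m, s in scores.items())
    return 1.0 / (1.0 + math.exp(-logit))  # sigmoid -> probability in (0, 1)

print(fused_intoxication_prob({"gaze_instability": 0.9, "eyelid_droop": 0.8,
                               "head_sway": 0.7, "steering_jerk": 0.9}))  # ~0.84
```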
And again, I think the ultimate solution's gonna be a combination of different sensors. - How hard is the problem from the vision perspective? - I think it's non-trivial. I think it's non-trivial. And I think the biggest part is getting the data, right? It's like getting enough data examples. So for this research project, we partnered with the Transportation Authorities of Sweden, and we literally had a racetrack with a safety driver, and we basically progressively got people drunk.
- Nice. - So, but you know, that's a very expensive dataset to collect, and you wanna collect it globally and in multiple conditions. - Yeah, the ethics of collecting a dataset where people are drunk is tricky. - Yeah, yeah, definitely. - Which is funny because, I mean, let's put drunk driving aside.
The number of drunk people in the world every day is very large. It'd be nice to have a large dataset of drunk people getting progressively drunk. In fact, you could build an app where people can donate their data 'cause it's hilarious. - Right, actually, yeah, but the liability. - Liability, the ethics, the how do you get it right, it's tricky, it's really, really tricky.
'Cause drinking is one of those things that's funny and hilarious and beloved, it's social, so on and so forth, but it's also the thing that hurts a lot of people, like a lot of people. Like alcohol is one of those things, it's legal, but it's really damaging to a lot of lives.
It destroys lives and not just in the driving context. I should mention, people should listen to Andrew Huberman who recently talked about alcohol. He has an amazing podcast. Andrew Huberman is a neuroscientist from Stanford and a good friend of mine. And he's like a human encyclopedia about all health-related wisdom.
So he has a podcast, you would love it. - I would love that. - No, no, no, no, no. Oh, you don't know Andrew Huberman? Okay, listen, you'll listen to Andrew, it's called Huberman Lab Podcast. This is your assignment, just listen to one. I guarantee you this will be a thing where you say, "Lex, this is the greatest human I have ever discovered." - Oh my God, 'cause I'm really on a journey of kind of health and wellness and I'm learning lots and I'm trying to build these, I guess, atomic habits around just being healthy.
So yeah, I'm definitely gonna do this. - His whole thing, this is great: he's a legit scientist, like, really well published. But in his podcast, he's not talking about his own work. He's like a human encyclopedia of papers. And so his whole thing is he takes a topic and, in a very fast, very clear way, you mentioned atomic habits, summarizes the research in a way that leads to protocols of what you should do.
He's really big on not just "This is what the science says," but "This is literally what you should be doing according to the science." And there's a lot of recommendations he makes, several of which I definitely don't do. Like, get sunlight as soon as possible after waking up, and for prolonged periods of time.
That's a really big one, and there's a lot of science behind that one. There's a bunch of stuff, very systematic. You're gonna be like, "Lex, this is my new favorite person." I guarantee it. And if you somehow don't know Andrew Huberman and you care about your wellbeing, you should definitely listen to him.
Love you, Andrew. Anyway, so what were we talking about? Oh, alcohol and detecting alcohol. So this is a problem you care about and you've been trying to solve. - And actually, broadening it, I do believe that the car is gonna be a wellness center. Because again, imagine if you have a variety of sensors inside the vehicle tracking not just your emotional state, your level of distraction, drowsiness, and intoxication, but also maybe even things like your heart rate, your heart rate variability, and your breathing rate.
And it can start optimizing the ride based on what your goals are. So I think we're gonna start to see more of that, and I'm excited about that. - Yeah, what are the challenges you're tackling with SmartEye currently? What are, like, the trickiest things to get right?
Is it basically convincing more and more car companies that having AI inside the car is a good idea, or are there more technical, algorithmic challenges? What's been keeping you mentally busy? - I think a lot of the car companies we are in conversations with are already definitely interested in driver monitoring.
Like I think it's becoming a must-have, but even for interior sensing, I can see, like, we're engaged in a lot of advanced engineering projects and proofs of concept. Technologically, I can see a path to making it happen. I think the challenge is the use case.
How does the car respond once it knows something about you? Because you want it to respond in a thoughtful way that isn't off-putting to the consumer in the car. So I think that's the user experience, and I don't think we've really nailed that. And that's not our part; we're the sensing platform, but we usually collaborate with the car manufacturer to decide what the use case is.
So say you figure out that somebody's angry while driving. Okay, what should the car do? You know? - Do you see yourselves in a role of nudging, of basically coming up with solutions, and then the car manufacturers kind of put their own little spin on them?
- Right, like we are the ideation, creative thought partner, but at the end of the day, the car company needs to decide what's on brand for them, right? Like maybe when it figures out that you're distracted or drowsy, it shows you a coffee cup, right? Or maybe it takes more aggressive action and basically says, okay, if you don't take a rest in the next five minutes, the car's gonna shut down, right?
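(The graded response described here could be sketched as a simple escalation policy; the thresholds and action names are hypothetical, and as noted, the real choices are the car maker's.)

```python
# Toy escalation policy for drowsiness responses.
def respond_to_drowsiness(level: float, minutes_ignored: float) -> str:
    if level < 0.3:
        return "none"
    if level < 0.6:
        return "show_coffee_cup_icon"    # gentle nudge
    if minutes_ignored < 5.0:
        return "voice_prompt_rest_stop"  # firmer suggestion
    return "limp_mode_and_pull_over"     # most aggressive end of the range
```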
Like there's a whole range of actions the car can take, and doing the thing that builds trust with the driver and the passengers, I think that's what we need to be very careful about. - Yeah, car companies are funny, 'cause they have their own, like, I mean, that's why people still get cars.
I hope that changes, but they get it 'cause it's a certain feel and look and it's a certain, they become proud like Mercedes Benz or BMW or whatever, and that's their thing. That's the family brand or something like that, or Ford or GM, whatever, they stick to that thing.
It's interesting. It's like, it should be, I don't know. It should be a little more about the technology inside. And I suppose there too, there could be a branding, like a very specific style of luxury or fun, all that kind of stuff, yeah. - You know, I have an AI focused fund to invest in early stage kind of AI driven companies.
And one of the companies we're looking at is trying to do what Tesla did, but for boats, for recreational boats. Yeah, so they're building an electric and kind of slash autonomous boat. And it's kind of the same issues, like what kind of sensors can you put in? What kind of states can you detect both exterior and interior within the boat?
Anyways, it's like really interesting. Do you boat at all? - No, not well, not in that way. I do like to get on the lake or a river and fish from a boat, but that's not boating. That's a different, still boating. - Low tech, a low tech. - Low tech boat.
Get away from, get closer to nature boat. I guess going out to the ocean is also getting closer to nature in some deep sense. I mean, I guess that's why people love it. The enormity of the water just underneath you, yeah. - I love the water. - I love both.
I love salt water. It's just humbling to be in front of this giant thing that's so powerful, that was here before us and will be here after us. But I also love the peace of a small wooded lake. It's just, everything's calm. - Therapeutic.
- You tweeted, "I'm excited about Amazon's acquisition of iRobot." I think it's super interesting, just given the trajectory you're part of, of this honestly small number of companies that are playing in this space, that are trying to have an impact on human beings. So it is an interesting moment in time that Amazon would acquire iRobot.
You tweeted, "I imagine a future where home robots are as ubiquitous as microwaves or toasters. Here are three reasons why I think this is exciting." If you remember, I can look it up. But why is this exciting to you? - I mean, I think the first reason why this is exciting, I can't remember the exact order in which I put them.
But one is just, it's gonna be an incredible platform for understanding our behaviors within the home. If you think about Roomba, which is the robot vacuum cleaner, the flagship product of iRobot at the moment, it's like running around your home, understanding the layout, it's understanding what's clean and what's not, how often do you clean your house?
And all of these behaviors are a piece of the puzzle in terms of understanding who you are as a consumer. And I think that could be, again, used in really meaningful ways, not just to recommend better products or whatever, but actually to improve your experience as a human being.
So I think that's very interesting. And think about the natural evolution of these robots in the home. It's interesting, Roomba isn't really a social robot at the moment. But I once interviewed one of the chief engineers on the Roomba team, and he talked about how people named their Roombas.
And if their Roomba broke down, they would call in and say, "My Roomba broke down," and the company would say, "Well, we'll just send you a new one." And, "No, no, no, Rosie, you have to like, "yeah, I want you to fix this particular robot." So people have already built interesting emotional connections with these home robots.
And I think that, again, that provides a platform for really interesting things to just motivate change. Like it could help you. I mean, one of the companies that spun out of MIT, Catalia Health, the guy who started it spent a lot of time building robots that help with weight management.
So weight management, sleep, eating better, yeah, all of these things. - Well, if I'm being honest, Amazon does not exactly have a track record of winning over people in terms of trust. Now, that said, it's a really difficult problem for a human being to let a robot in their home that has a camera on it.
- Right. - That's really, really, really tough. And I think Roomba actually, I have to think about this, but I'm pretty sure, now or for some time already, has had cameras. The most recent Roomba does. I have so many Roombas. - Oh, you actually do? - Well, I programmed it.
I don't use a Roomba for vacuuming, back off. People that have been to my place, they're like, yeah, you definitely don't use these Roombas. - I can't tell the valence of this comment. Was it a compliment or? - No, it's just a bunch of electronics everywhere. I have six or seven computers, I have robots everywhere, Lego robots, small robots and big robots, just giant piles of robot stuff.
And yeah. But including the Roombas, they're being used for their body and intelligence, but not for their purpose. (both laughing) I've changed them, repurposed them for other purposes, for deeper, more meaningful purposes than just like the butter robot. Yeah, which just brings a lot of people happiness, I'm sure.
They have a camera, because of the thing they advertised. I had my own cameras too, but the camera on the new Roomba, they have, like, state-of-the-art poop detection, as they advertised. Apparently it's a big problem for vacuum cleaners: if they go over, like, dog poop, they just run it over and create a giant mess.
So apparently they collected, like, a huge amount of data of different shapes and looks and whatever of poop, and now they're able to avoid it and so on. They're very proud of this. So there is a camera, but you don't think of it as having a camera.
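(A guess at the shape of the detect-then-avoid logic; the "pet_waste" label and the detection format are invented for illustration.)

```python
# Sketch of detect-then-avoid: keep only high-confidence "pet_waste"
# detections and inflate a keep-out radius around each one.
def keep_out_zones(detections: list, margin_m: float = 0.15) -> list:
    zones = []
    for d in detections:
        if d["label"] == "pet_waste" and d["confidence"] > 0.8:
            zones.append((d["x_m"], d["y_m"], d["radius_m"] + margin_m))
    return zones

print(keep_out_zones([{"label": "pet_waste", "confidence": 0.95,
                       "x_m": 1.2, "y_m": 0.4, "radius_m": 0.05}]))
# -> [(1.2, 0.4, 0.2)]
```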
Yeah, you don't think of it as having a camera because you've grown to trust it, I guess, 'cause our phones, at least most of us, seem to trust this phone, even though there's a camera looking directly at you. I think that if you trust that the company is taking security very seriously, I actually don't know how that trust was earned with smartphones.
I think it just started to provide a lot of positive value into your life, where you just took it in, and then the company over time has shown that it takes privacy very seriously, that kind of stuff. But Amazon has not always, in its social robots, communicated that this is a trustworthy thing, both in terms of culture and competence.
'Cause I think privacy is not just about what do you intend to do, but also how good are you at doing that kind of thing. So that's a really hard problem to solve. - But I mean, but a lot of us have Alexas at home and I mean, Alexa could be listening in the whole time, right, and doing all sorts of nefarious things with the data.
You know, hopefully it's not, but I don't think it is. - But Amazon is not, it's such a tricky thing for a company to get right, which is to earn the trust. I don't think Alexas earn people's trust quite yet. - Yeah, I think it's not there quite yet.
I agree. - And they struggle with this kind of stuff. In fact, when these topics are brought up, people always get nervous. And if you get nervous about it, I mean, the way to earn people's trust is not by, like, ooh, don't talk about this. (laughs) It's to be open, be frank, be transparent, and also create a culture where it radiates at every level, from engineer to CEO, that you're good people who have a common-sense idea of what it means to respect basic human rights and the privacy of people, and all that kind of stuff.
And I think that propagates throughout, and that's the best PR: over time you understand that these are good folks doing good things. Anyway, speaking of social robots, have you heard about the Tesla Bot, the humanoid robot? - Yes, I have. Yes, yes, yes, but I don't exactly know what it's designed to do.
Do you? - Oh. - You probably do. - No, I know what it's designed to do, but I have a different perspective on it. It's a humanoid form, and it's designed for automation tasks in the same way that industrial robot arms automate tasks in the factory.
So it's designed to automate tasks in the factory. But I think that humanoid form, as we were talking about before, is one that we connect with as human beings. Anything legged, honestly, but the humanoid form especially, we anthropomorphize most intensely. And so, to me, it's exciting to see both Atlas, developed by Boston Dynamics, and anyone else, including Tesla, trying to make humanoid robots cheaper and more effective.
To me, the obvious way it transforms the world is social robotics, versus automation of tasks in the factory. So yeah, I just wanted to mention it in case that was something you were interested in, 'cause I find its application in social robotics super interesting. - We did a lot of work with Pepper.
Pepper the robot, a while back. We were the emotion engine for Pepper, which is SoftBank's humanoid robot. - And how tall is Pepper? It's like-- - Yeah, I don't know, like five foot maybe, right? - Yeah. - Pretty big, pretty big. And it was designed to be at airport lounges and retail stores, mostly customer service, right?
Hotel lobbies. And I mean, I don't know where the state of the robot is, but I think it's very promising. I think there are a lot of applications where this can be helpful. I'm also really interested in, yeah, social robotics for the home, right? Like that can help elderly people, for example, transport things from one location of the home to the other, or even like just have your back in case something happens.
Yeah, I don't know. I do think it's a very interesting space. It seems early, though. Do you feel like the timing is now? - Yes, 100%. It always seems early until it's not, right? - Right, right, right. - I definitely think that the time is now, like this decade, for social robots.
Whether the humanoid form is right, I don't think so. - Mm-hmm. - If we just look at Jibo as an example, I feel like most of the problem, the challenge, the opportunity of social connection between an AI system and a human being does not require you to also solve robot manipulation and bipedal mobility.
So I think you could do that with just a screen, honestly. But there's something about the physical interface of Jibo, the way it can rotate and so on, that's also compelling. And then you see all these robot companies that fail, incredible companies like Jibo. Even iRobot, I mean, in some sense it's a big success story, that it was able to find-- - Right, a niche.
- A niche thing and focus on it. But in some sense, it's not a success story, because they didn't build any other robot; it didn't expand to all kinds of robotics. Once you're in the home, though, maybe that's what happens with Amazon: they'll flourish into all kinds of other robots.
But do you have a sense, by the way, of why it's so difficult to build a robotics company? Why have so many companies failed? - I think it's that you're building a vertical stack, right? You are building the hardware plus the software, and you have to do this at a cost that makes sense.
So I think Jibo was retailing at, I don't know, like $700, $800. - Yeah, something like that. - Which, for the use case, right? There's a dissonance there. It's too high. So I think it's cost, the cost of building the whole platform in a way that is affordable for the value it's bringing.
I think that's the challenge. And for these home robots that are gonna help you do stuff around the home, the mobility piece of it is a challenge too. That's hard. - Well, one of the things I'm really excited about with the Tesla Bot is the people working on it. And this is probably the criticism I would apply to some of the other folks who worked on social robots: the people working on the Tesla Bot are focused on, and know how to do, mass manufacturing, creating a product that's super cheap.
- Very cool. - That's the focus. The engineering focus, and you could criticize them for this too, is not on the experience of the robot. They're focused on how to get this thing to do the basic stuff that the humanoid form requires, as cheaply as possible.
The fewest number of actuators, the fewest number of motors, increasing efficiency, decreasing the weight, all that kind of stuff. - So that's really interesting. - I would say that Jibo and all those folks focused on the design, the experience, all of that, and how to manufacture it was secondary.
No, you have to think like the Tesla Bot folks, from first principles: what is the fewest number of components, the cheapest components? How can I build as much of it in-house as possible, without having to deal with all the complexities of a supply chain, all that kind of stuff? - Interesting. - 'Cause if you build a robotics company, you're not building one robot.
You're building, hopefully, millions of robots, and you have to figure out how to do that. Where the final thing, I mean, if it's a Jibo type of robot, without getting into a lengthy discussion, is there a reason why Jibo has to be over $100?
- It shouldn't be. - Right, like the basic components. - The components of it, right. - You could start to actually discuss, okay, what is the essential thing about Jibo? What is the cheapest way I can have a screen? What's the cheapest way I can have a rotating base?
- Right. - All that kind of stuff. And then you continuously drive down cost. Speaking of which, you have launched extremely successful companies, you have helped others, you've invested in companies. Can you give advice on how to start a successful company? - I would say have a problem that you really, really, really wanna solve, right?
Something that you're deeply passionate about. And honestly, take the first step; that's often the hardest. And don't overthink it. You know, there's this idea of a minimum viable product, or a minimum viable version of an idea, right? Yes, you're imagining this humongous, super elegant, super beautiful thing.
But reduce it to the littlest thing you can bring to market that can solve a problem, that can help address a pain point somebody has. They often tell you, start with a customer of one, right? If you can solve a problem for one person, then there's probably-- - Yourself or some other person.
- Right. - Pick a person. - Exactly. - It could be you. - Yeah. - That's actually often a good sign, if you enjoy a thing, enjoy a thing where you have a specific problem that you'd like to solve. That's a good N of one to focus on.
- Right. - What else? So step one is the hardest, but how do you, (Rana laughs) there's other steps as well, right? - I also think who you bring around the table early on is so key, right? And being clear on what I call your core values, or your North Star.
It might sound fluffy, but actually it's not. Roz and I, I feel like we did that very early on. We sat around her kitchen table and we said, okay, there are so many applications of this technology, how are we gonna draw the line? How are we gonna set boundaries?
We came up with a set of core values that, in the hardest of times, we fell back on to determine how to make decisions. So I feel like just getting clarity on these core values matters. For us, it was respecting people's privacy, only engaging with industries where there's clear opt-in.
So for instance, we don't do any work in security and surveillance, things like that. We're very big on, you know, one of our core values is human connection and empathy, right? And that is, yes, it's an AI company, but it's about people. These all become encoded in how we act, even if you're a tiny team of two or three or whatever.
So I think that's another piece of advice. - What about finding people, hiring people? If you care about people as much as you do, it seems like such a difficult thing to find the right people. - I think early on as a startup, you want people who share the passion and the conviction, because it's gonna be tough.
I've yet to meet a startup where it was just a straight line to success, right? And not just startups, even in everyday people's lives, you always run into obstacles and naysayers. So you need people who are believers, whether they're people on your team or your investors. You need investors who really believe in what you're doing, because that means they will stick with you.
They won't give up at the first obstacle. I think that's important. - What about raising money? What about finding investors? First of all, raising money at all, but also raising it from the right sources, ones that ultimately don't hinder you but help you, empower you, all that kind of stuff. What advice would you give there?
You successfully raised money many times in your life. - Yeah, again, it's not just about the money. It's about finding the right investors, who are going to be aligned in terms of what you wanna build and who believe in your core values. For example, especially later on, in my latest round of funding, I tried to bring in investors that really care about the ethics of AI, right?
And the alignment of vision and mission and core values is really important. It's like you're picking a life partner, right? It's the same kind of- - So you take it that seriously for investors? - Yeah, because they're gonna have to stick with you. - You're stuck together. - For a while anyway, yeah.
(both laughing) Maybe not for life, but for a while, for sure. - For better or worse. I forget what the vows usually sound like. For better or worse? No. - Through sick. - Through sickness and then- - Through something. - Yeah, yeah, yeah. (both laughing) - Oh boy. Yeah, anyway, it's romantic and deep and you're in it for a while.
So it's not just about the money. You tweeted about going to your first Capital Camp, an investing get-together. - Oh yeah. - And that you learned a lot. So this is about investing. What did you learn from that? What have you learned about investing in general? Because you've been on both ends of it.
- I mean, I try to use my experience as an operator, now with my investor hat on, when I'm identifying companies to invest in. First of all, the good news is I have a technology background, right? I really understand machine learning and computer vision and AI, et cetera.
I can apply that level of understanding, right? 'Cause everybody says they're an AI company or they're AI tech, and I'm like, no, no, no, show me the technology. So I can do that level of diligence, which I actually love. And then there's the litmus test of, if I'm in a conversation with you, am I excited to tell you about this new company that I just met, right?
And if I'm an ambassador for that company and I'm passionate about what they're doing, I usually use that. Yeah, that's important to me when I'm investing. - So that means you actually can explain what they're doing and you're excited about it. - Exactly, exactly. Thank you for putting it so succinctly.
I was rambling, but exactly, that's it: I understand it and I'm excited about it. - It's funny, but sometimes it's unclear. I'll hear people talk for a while and it sounds cool, like they paint a picture of a world, but then when you try to summarize it, you're not exactly clear on what the core powerful idea is.
You can't just build another Facebook; there has to be a core, simple-to-explain idea that you then can or can't get excited about, but it's there, sitting right there. How do you ultimately pick who you think will be successful? It's not just about the thing you're excited about, there's other stuff.
- Right, and with early stage companies, like pre-seed companies, which is where I'm investing, sometimes the business model isn't clear yet, or the go-to-market strategy isn't clear. It's very early on, so some of these things haven't been hashed out yet, which is okay.
So the way I like to think about it is, if this company is successful, will this be a multi-billion or even trillion-dollar market or company? That's definitely a lens that I use. - What's pre-seed? What are the different stages? And what's the most exciting stage, or what's interesting about every stage, I guess?
- Yeah, so pre-seed is usually when you're just starting out. You've maybe raised a friends-and-family round, so you've raised some money from people you know, and you're getting ready to take your first institutional check, like your first check from an investor. And I love that stage. There's a lot of uncertainty.
Some investors really don't like that stage because the financial models aren't there. Often the teams aren't even fully formed; it's really, really early. But to me, it's a magical stage, because it's the time when there's so much conviction, so much belief, almost delusional, right? And there's a little bit of naivete with founders at that stage, and I just love it.
It's contagious. And often they're first-time founders, not always, but often, and I can share my experience as a founder myself, and I can empathize, right? And I can create a safe ground, because you have to be careful what you tell your investors, right?
And I will often say, I've been in your shoes as a founder, you can tell me if it's challenging, you can tell me what you're struggling with, it's okay to vent. So I create that safe ground, and I think that's a superpower. - Yeah, and I guess you have to figure out if this kind of person is gonna be able to ride the roller coaster of many pivots and challenges and all that kind of stuff.
And if the space of ideas they're working in is interesting, the way they think about the world. 'Cause if it's successful, the thing they end up with, and the reason it's successful, might be very different. - Actually, I was gonna say the third thing: so the technology is one aspect, the market or the idea is the second, and the third is the founder, right?
Is this somebody who I believe has conviction, is a hustler, is gonna overcome obstacles, is gonna be a great leader, right? As a founder, you are often the first person, and your role is to bring amazing people around you to build this thing.
And so you're an evangelist, right? How good are you gonna be at that? So I try to evaluate that too. - You also, in the tweet thread about it, mentioned, is this a known concept, random rich dudes, RRDs? Saying that there should be random rich women, I guess.
What's the women version of dudes? Ladies? I don't know. Is this a technical term? Is it known, random rich dudes? - Well, I didn't make that up, but I was at this Capital Camp, which is a get-together for investors of all types, and there must have been maybe 400 or so attendees, of which maybe 20 were women.
It was just very disproportionately male dominated, which I'm used to. - I think you're used to this kind of thing. - I'm used to it, but it's still surprising. And I'm raising money for this fund; my fund partner is a guy called Rob May, who's done this before.
So I'm new to the investing world, but he's done this before. Most of our investors in the fund are these, I mean, awesome, I'm super grateful to them, just random rich guys. And I'm like, where are the rich women? So I'm really adamant about investing in women-led AI companies, but I would also love to have women investors be part of my fund, because I think that's how we drive change.
- Yeah, that takes time, of course, but there's been quite a lot of progress. For the next Mark Zuckerberg to be a woman and all that kind of stuff, that means a huge amount of wealth generated by women, and then controlled by women, and then allocated by women, and all that kind of stuff.
And then beyond just women, just broadly across all different measures of diversity and so on. Let me ask you to put on your wise sage hat. So you already gave advice on startups, and just advice for women, but in general, advice for folks in high school or college today, how to have a career they can be proud of, how to have a life they can be proud of.
I suppose you have to give this kind of advice to your kids. - Kids, yeah. Well, here's the number one advice that I give to my kids. My daughter's now 19, by the way, and my son's 13 and a half, so they're not little kids anymore. - Does it break your heart?
- It does, a little, but they're awesome. They're my best friends. Yeah, I think the number one advice I would share is embark on a journey without attaching to outcomes, and enjoy the journey, right? We're often so obsessed with the end goal that, A, it doesn't allow us to be open to different endings of a journey or a story.
You become so fixated on a particular path that you don't see the beauty in the alternative paths. And then you forget to enjoy the journey, because you're just so fixated on the goal. I've been guilty of that for many, many years of my life. And I'm now trying to make the shift of, no, no, no, again, trust that things are gonna work out, and it'll be amazing, and maybe even exceed your dreams.
We have to be open to that. - Yeah, taking a leap into all kinds of things. I think you tweeted that you went on vacation by yourself or something like this. - I know. - And just going, just taking the leap, doing it. - Totally, doing it. - And enjoying it, enjoying the moment, enjoying the weeks, enjoying not looking at some kind of career ladder next step, and so on.
Yeah, there's something to over-planning too. I'm surrounded by a lot of people that kinda do, but I don't plan. - You don't? - No. - Do you not do goal setting? - My goal setting is very, I like the affirmations, it's almost, I don't know how to put it into words, but it's a little bit like what my heart yearns for, kind of.
And I guess it's in the space of emotions more than in the rational space. 'Cause I just try to picture a world that I would like to be in, and that world is not clearly pictured; it's mostly in the emotional world.
I mean, I think about that with robots, 'cause I have this desire, I've had it my whole life, well, it took different shapes, but once I discovered AI, the desire was to build what, in the context of this conversation, could be most easily described as basically a social robotics company.
And that's something I dreamed of doing. Well, there's a lot of complexity to that story, but that's the only thing, honestly, I dream of doing. So I imagine a world that I could help create, but there are no steps along the way. I'm just kind of stumbling around, following happiness and working my ass off in almost random directions, like an ant does.
But a lot of people, a lot of successful people around me say this, you should have a plan, you should have a clear goal. You have a goal at the end of the month, you have a goal at the end of the year. I don't, I don't, I don't.
And there's a balance to be struck, of course, but there's something to be said for really making sure that you're living life to the fullest, which goals can actually get in the way of. - So one of the best, kind of the most, what do you call it when it challenges your brain? What do you call it?
- The only thing that comes to mind, and this is me saying it, is mind fuck, but yes. - Okay, okay, maybe, something like that. - Yes. - Super inspiring talk. Kenneth Stanley, he was at OpenAI, he just left, and he has a book called "Why Greatness Cannot Be Planned." And it's actually an AI book.
And he's done all these experiments that basically show that when you over-optimize, the trade-off is you're less creative, right? To create true greatness and truly creative solutions to problems, you can't over-plan it. And he generalizes it beyond AI: he talks about how that applies to our personal lives and our organizations and our companies, which are over-indexed on KPIs, right?
Like look at any company in the world, and it's all like, these are the goals, these are the weekly goals, and the sprints, and then the quarterly goals, blah, blah, blah. And he just shows with a lot of his AI experiments that that's not how you create truly game-changing ideas.
So there you go. - Yeah, yeah. - You should interview Kenneth, he's awesome. - Yeah, there's a balance, of course. 'Cause yeah, many moments of genius will not come from planning and goals, but you still have to build factories, you still have to manufacture, you still have to deliver, and there are still deadlines and all that kind of stuff.
And for that, it's good to have goals. - I do goal setting with my kids, we all have our goals. But I think we're starting to morph into more of these bigger picture goals, and not obsess about, I don't know, it's hard. - Well, I honestly think, especially with kids, it's much better to have a plan and have goals and so on, 'cause you have to learn the muscle of what it feels like to get stuff done.
But I think once you learn that, there's flexibility. For me, 'cause I spent most of my life with goal setting and so on, I got good at it with grades and school. I mean, if you wanna be successful at school, the kind of stuff kids have to do in high school and college, in terms of managing their time and getting so much stuff done, is demanding.
Taking five, six, seven classes in college, that would break the spirit of most humans if they took even one of them later in life. It's really difficult stuff, especially in engineering curricula. So I think you have to learn that skill, but once you learn it, you can be a little bit on autopilot, use that momentum, and then allow yourself to be lost in the flow of life.
Also, I worked pretty hard to allow myself to have the freedom to do that. That's a tricky freedom to have, because a lot of people get lost in the rat race, and financially, whenever they get a raise, they'll get a bigger house.
- Right, right, right. - Or something like this, so you're always trapped in this race. I put a lot of emphasis on always living below my means, and so there's a lot of freedom to do whatever the heart desires. But everyone has to decide what's the right thing for them.
For some people, having a lot of responsibilities, like a house they can barely afford or a lot of kids, the responsibility side of that really helps them get their shit together. Like, all right, I need to be really focused and good. Some of the most successful people I know have kids, and the kids bring out the best in them.
They make them more productive, not less. - Accountability, it's an accountability thing, absolutely. - And it gives them something to actually live and fight and work for, like having a family. It's fascinating to see, 'cause you would think kids would be a hit on productivity, but they're not, for a lot of really successful people.
They're like an engine of-- - Right, efficiency, oh my God. - Yeah, it's weird. I mean, it's beautiful, it's beautiful to see, and also happiness. Speaking of which, what role do you think love plays in the human condition? - I think love is, yeah, I think it's why we're all here.
I think it would be very hard to live life without love in any of its forms, right? - Yeah, that's the most beautiful of forms that human connection takes, right? - Yeah, I feel like everybody wants to feel loved, right? In one way or another, right? - And to love.
- Yeah, and to love too. - Totally, yeah, I agree with that. - Both of them. I'm not even sure which feels better. - To give love too, yeah. - And like we've been talking about, it's an interesting question whether one day we'll be able to love a toaster.
Get some small-- - I wasn't quite thinking about that when I said-- - The toaster. - Yeah, like we all need love and give love. Okay, you're right. - I was thinking about Brad Pitt and toasters. - Brad Pitt and toasters, great. - All right, well, I think we started on love and ended on love.
This was an incredible conversation, Rana, thank you so much. You're an incredible person. Thank you for everything you're doing in AI, in the space of just caring about humanity, human emotion, about love, and being an inspiration to a huge number of people in robotics, in AI, in science, in the world in general.
So thank you for talking to me, it's an honor. - Thank you for having me, and you know I'm a big fan of yours as well, so it's been a pleasure. - Thanks for listening to this conversation with Rana el Kaliouby. To support this podcast, please check out our sponsors in the description.
And now, let me leave you with some words from Helen Keller. The best and most beautiful things in the world cannot be seen or even touched. They must be felt with the heart. Thank you for listening, and hope to see you next time. (upbeat music)