Primary Topic
This episode offers a critical response to Scott Alexander's article on AI safety, examining his dismissal of the argument that AI fears resemble past unfounded apocalyptic predictions.
Episode Summary
Main Takeaways
- Historical patterns of true and false apocalyptic predictions can inform our understanding of current AI safety debates.
- The lack of precedents for small-scale AI-related disasters suggests current fears are more akin to moral panics.
- Technological and cultural fears often prove exaggerated, with beneficial advancements arising instead of predicted disasters.
- Effective risk management strategies historically involved actionable responses, unlike the vague fears surrounding AI.
- Recognizing and distinguishing between valid threats and moral panics is crucial in rational public discourse and policy making.
Episode Chapters
1: Introduction to AI Safety
A brief overview of Scott Alexander's position on AI safety, emphasizing his skepticism about AI-related apocalypses. Simone Collins: "Scott’s skepticism stems from a pattern where fears about new technologies are frequently unfounded."
2: Historical Context of Apocalyptic Predictions
Discussion on how true and false historical predictions can provide insight into current AI safety arguments. Malcolm Collins: "Looking back, genuine threats had small precedents that were ignored, unlike the AI fears today."
3: Differentiating Between Moral Panics and Legitimate Concerns
Explores how to distinguish between irrational fears and legitimate safety concerns by comparing them to historical events. Simone Collins: "We must learn from history to see that not all technological changes bring the doom they're predicted to."
4: Implications for AI Safety Debates
Conclusion stressing the importance of historical awareness in shaping rational debates about AI safety. Malcolm Collins: "By understanding the past, we can avoid repeating mistakes in our discussions about AI."
Actionable Advice
- Educate yourself on historical technological fears: Understanding past moral panics can provide perspective on current fears about AI.
- Engage in informed debate: Use historical examples to challenge unfounded fears in discussions about AI safety.
- Promote rational policy making: Advocate for AI policies based on proven risks and benefits, not on speculative disasters.
- Support interdisciplinary research: Encourage studies that integrate technology assessment with historical analysis to predict future trends accurately.
- Foster public awareness: Help the public differentiate between rational concerns and moral panics regarding new technologies.
About This Episode
In this thought-provoking video, Malcolm and Simone Collins offer a detailed response to Scott Alexander's article on AI apocalypticism. They analyze the historical patterns of accurate and inaccurate doomsday predictions, providing insights into why AI fears may be misplaced. The couple discusses the characteristics of past moral panics, cultural susceptibility to apocalyptic thinking, and the importance of actionable solutions in legitimate concerns. They also explore the rationalist community's tendencies, the pronatalist movement, and the need for a more nuanced approach to technological progress. This video offers a fresh perspective on AI risk assessment and the broader implications of apocalyptic thinking in society.
Malcolm Collins: [00:00:00] I'm quoting from him here, okay? One of the most common arguments against AI safety is, here's an example of a time someone was worried about something, but it didn't happen.
Therefore, AI, which you are worried about, also won't happen. I always give the obvious answer. Okay. But there are other examples of times someone was worried about something and it did happen, right? How do we know AI isn't more like those?
So specifically, what he is arguing against is: every 20 years or so you get one of these apocalyptic movements, and this is why we're discounting this movement. This is how he ends the article, so people know this isn't an attack piece; this is what he asked for in the article. He says, conclusion, I genuinely don't know what these people are thinking.
I would like to understand the mindset of people who make arguments like this, but I'm not sure I've succeeded. What is he missing according to you? He is missing something absolutely giant in everything that he's laid out.
And it is a very important point and it's very clear from his write up that this idea had just never occurred to him.
People
Scott Alexander, Simone Collins, Malcolm Collins
Companies
None
Books
None
Guest Name(s):
None
Content Warnings:
None
Transcript
A
I'm quoting from him here. Okay. One of the most common arguments against AI safety is here's an example of a time someone was worried about something, but it didn't happen. Therefore AI, which you are worried about, also won't happen. I always give the obvious answer, okay, but there are other examples of times someone was worried about something and it did happen.
Right. How do we know AI isn't more like those? So specifically, what he is arguing against is: every 20 years or so you get one of these apocalyptic movements, and this is why we're discounting this movement. This is how he ends the article.
So people know this isn't an attack piece. This is what he asked for in the article. He says, conclusion, I genuinely don't know what these people are thinking. I would like to understand the mindset of people who make arguments like this, but I'm not sure I've succeeded. What is he missing according to you?
He is missing something absolutely giant in everything that he's laid out. And it is a very important point, and it's very clear from his write up that this idea had just never occurred to him. Would you like to know more? Hello Simone. I am excited to be here with you today.
We today are going to be creating a video reply, a response to an argument. Scott Alexander, the guy who writes Astral Codex Ten, or Slate Star Codex, depending on what era you were introduced to his content in, wrote about arguments against AI apocalypticism, which are based around... well, that'll be clear when we get into the piece, because I'm going to read some parts of it. Now, I should note: this is not a "Scott Alexander is not smart" or anything like that piece. We actually think Scott Alexander is incredibly intelligent and well meaning, and he is an intellectual who I consider a friend and somebody whose work I enormously respect.
And I am creating this response because the piece is written in a way that actively requests a response. It's like, why do people believe this argument when I find it to be so weak? Like one of those what am I missing here kind of things. Yeah, what am I missing here kind of things. He just clearly, and I like the way he lays out his argument because it's very clear that yes, there's a huge thing he's missing, and it's clear from his argument and the way that he thought about it that he's just literally never considered this point, and it's why he doesn't understand this argument.
So we're going to go over his counter argument and we're going to go over the thing that he happens to be missing, and I'm quoting from him here. Okay. One of the most common arguments against AI safety is here's an example of a time someone was worried about something, but it didn't happen. Therefore, AI, which you are worried about, also won't happen. I always give the obvious answer, okay?
But there are other examples of times someone was worried about something, and it did happen. Right? How do we know AI isn't more like those? The people I'm arguing with always seem so surprised by this response, as if I'm committing some sort of betrayal by destroying their beautiful argument. So, specifically, the form of argument against AI apocalypticism he is arguing against, when we talk about it more, sounds like: every 20 years or so, you get one of these apocalyptic movements, and this is why we're discounting this movement. Okay? And I'm going to go further with his argument here. So he says, I keep trying to steel-man this argument.
So keep in mind, he's trying to steel-man it; this is not us saying, like, he doesn't want to steel-man it. Okay: I keep trying to steel-man this argument, and it keeps resisting my steel-manning.
For example, maybe the argument is a failed attempt to gesture at a principle of, quote, most technologies don't go wrong, but people make the same argument with things that aren't technologies, like global cooling or overpopulation. Maybe the argument is a failed attempt to gesture at a principle of, the world is never destroyed, so doomsday prophecies have an abysmal track record. But overpopulation and global cooling don't claim that no one will die, just that a lot of people will.
And plenty of prophecies about mass death events have come true, e.g., the Black Plague, World War Two, AIDS. And none of this explains coffee. So there's some weird coffee argument that he comes back to that I don't actually think is important to understanding this, but I can read it if you're interested.
B
I'm sufficiently intrigued. Okay. People basically made the argument, and I'm basically reading from him now: once people were worried about coffee, but now we know coffee is safe, therefore AI will also be safe. Which is to say, there was a period where everyone was afraid of coffee, and there was a lot of apocalypticism about it, and there really was, like, people were afraid of caffeine for a period, and the fears turned out wrong. And then people correlate that with AI.
A
And I think that is a bad argument. But the other type of argument he's making here, so you can see, and I will give a final framing from him here that I think is a pretty good summation of his argument: there is at least one thing that turned out to be possible, therefore superintelligent AI is also possible. That's the way that he hears it when people make this argument: there is at least one thing that turned out to be possible, therefore superintelligent AI is also possible, and safe, presumably, because the one thing we're talking about was a past technology.
And then he says, in an only slightly less hostile rephrasing: people were wrong when they said nuclear chain reactions were impossible, therefore they might also be wrong when they say superintelligent AI is impossible. Conclusion: I genuinely don't know what these people are thinking. And then he says, I would like to understand the mindset. So this is how he ends the article, so people know this isn't an attack piece. This is what he asked for in the article.
He says, conclusion: I genuinely don't know what these people are thinking. I would like to understand the mindset of people who make arguments like this, but I'm not sure I've succeeded. The best I can say is that sometimes people on my side make similar arguments, the nuclear chain reaction one, which I don't immediately flag as dumb, and maybe I can follow this thread to figure out why they seem tempting sometimes.
All right, so great, great. What is he missing? Actually, I'd almost take a pause moment here to see if our audience can guess, because he is missing something absolutely, absolutely giant in everything that he's laid out. There is a very logical reason to be making this argument, and it is a point that he is missing in everything that he's looking at. And it is a very important point.
And it's very clear from his write-up that this idea had just never occurred to him. Is this the Margaret Thatcher Irish terrorists idea? No. Okay, can you think: what if I was trying to predict the probability of a current apocalyptic movement being wrong? What would I use in a historic context?
And I usually don't lay out this point because I thought it was so obvious, and now I'm realizing that even for fairly smart people, it's not an obvious point. I have no idea. So, look: people historically have sometimes built up panics about things that didn't happen, and then sometimes people have raised red flags as outliers about things that did end up happening. True.
What we can do to find out if the current event is just a moral panic, or is actually a legitimate panic, is to correlate it with historical circumstances, to figure out: what things did the historically accurate predictions have in common, and what things did the pure moral panics have in common? So what are examples of past genuine apocalypses? So, like the plague, what else? So I went through.
And we'll go through examples of, yeah, it's history time with Malcolm Collins, when people actually predicted the beginnings of something that was going to be huge, and then times when... and... hold on, I should actually word this a bit differently.
B
So the industrial revolution. That's a good one, Simone. We'll get to these in a second. Okay. Okay.
A
The point is, I want to better explain this argument to people, because people may still struggle to understand, like, the really core point that he's missing. Historically speaking, people made predictions around things that were beginning to happen within their times becoming huge and apocalyptic events in the future that ended up in mass deaths. We can, from today's perspective, because we now know which of those predictions fell into which categories, correlate the ways that these communities acted, the features of the predictions, and the types of things they were making predictions about, to find out if somebody today making a prediction around some current trend leading to mass deaths,
okay, is going to fall into the camp of false prediction or accurate prediction, by correlating it with the historic predictions. Yeah, I think that the reason why... because I don't think he's a dumb person. He should have thought of this.
Like, I genuinely think, like, this is not a weird thing to think about. I think the reason he hasn't thought about it is because he's so on the side of AI apocalypticism being something we should focus on. He just hasn't thought about disconfirming arguments. And when you begin to correlate AI apocalypticism with historic apocalyptic movements, it fits incredibly snugly in the false fear category. So let's go into the historic predictions.
Okay, nice. So the times when they were accurate, all right, were predictions around the Black Plague, predictions around World War Two, predictions around AIDS, predictions around DDT, predictions around asbestos, predictions around cigarette smoking, Native American warnings around Europeans. Okay. All apocalyptic predictions which ended up becoming true. Now let's go through all of the ones that were incorrect, that developed freak-out, almost religious communities around them.
The splitting of the Higgs boson. People thought that would cause a black hole. Oh, I remember that. Yes. The industrial revolution.
That was a huge one right there. I don't know, that did precipitate the beginning of demographic collapse. It wasn't a problem for the reason people thought it was a problem. Okay.
They thought no one would have any jobs anymore. That was the core fear around the industrial revolution, if you remember. And we'll get more into that. The speed of trains. The reading panic. We could get to more on the industrial revolution,
if you want to park it in the edge-case category. The reading panic: a lot of people don't know there was a reading panic. Everyone thought that reading would destroy people, and that all of these young women were becoming addicted to reading,
very much in the way that we today... yeah, there was this fear. It was called, like, reading madness. A girl would get really into reading, and today we just call that being a nerdy young woman.
Video game violence. Yeah. This is one I didn't know about, because I went in to try to create as comprehensive a list of all these as possible. The telegraph:
critics believed that the telegraph would lead to a decline in literacy, destroy the postal service, and contribute to moral decay. Well, the postal service, eventually, but amazing. Anyway, radio: critics warned that radio would lead to a decline in literacy, encourage mindless entertainment, and foster a culture of violence. So for those that aren't aware: no, literacy has broadly risen since the radio was introduced.
The printing press. There was significant fear that the printing press would spread heretical ideas and misinformation. Didn't it precipitate the Reformation? Yeah. So I guess the printing press we can put in the maybe-legit column.
B
Come on. Legit? Yeah, the spinning wheel. No, not really. The printing press really only moved things forwards.
A
The people who were afraid of the printing press... well, we liked the Reformation, but it still did cause it. It was the beginning of... No, but this doesn't fall into the category of false predictions. Oh, this is like a fascist saying,
"I'm afraid that other people might have access to information which has given me power." That's not... The spinning wheel: this was in the 13th century. People thought the spinning wheel would lead to the collapse of civilization. Then there was coffee: when coffee was introduced to Europe in the 16th century, as I said, it was met with suspicion and resistance.
Some religious and political leaders feared that coffee houses would become centers of political dissent and immoral behavior. In 1674, the Women's Petition Against Coffee in England claimed that coffee made men impotent and was a threat to family life. And... yeah.
So what do these things have in common, now that we have categorized them into these two groups? And I think there are very loud things about the accurate predictions that you almost never see in the inaccurate predictions, and very loud things amongst all the inaccurate predictions that you never see in the accurate predictions. I think that these two categories of predictions actually look very different.
Okay. Okay. So, the things that turned out to be moral panics versus the things that turned out to be accurate predictions of a future danger: people were already dying in small ways with the real ones.
Yes. Every single time it has been an accurate prediction, whether it's AIDS or it's the... small batches of people were dying before it was about to go down.
Yeah. But we haven't had a single AI turn rogue and start murdering people yet. Like, we've had machines break in factories. I think, like, a robotic arm accidentally killed someone in a Tesla factory. But it wasn't, like, malicious.
It wasn't, like, trying to kill the person. There are factory deaths all the time, and of course fewer today than ever before, probably. Yes. This marks it clearly in the moral panic category.
B
Okay. Okay. The ones that turned out to be wrong are very often tied to a fear of people being replaced by machines. Yeah. Technology. It seems the biggest theme is: this new invention is going to ruin everything.
A
So historically we've seen that has never happened. Or cultural destruction. That's the other thing that's often claimed, which is also something we see around AI apocalypticism: fears around cultural destruction and jobs being taken. And then here...
And people can be like, what? But jobs are being taken. Yes, but more jobs are created. At the end of the day, that's what's always happened in a historic context. Yes.
Like, photography took jobs away from artists, but no one now sees, like, photography as a moral evil or something like that. Here's another one. The fake ones, the ones that turn out to be wrong, are usually related to technology or science. Yeah.
The ones that are right are usually related to medical concerns. Actually, they're always related to medical concerns or geopolitical predictions. Yeah, I was getting at that. It's...
B
It's an infection, either of, like, people or outside groups, like the Sea Peoples or Europeans or whatever, or literally a disease coming in; that seems to be the issue. And here is the final nail in the coffin for his "you cannot learn anything from this" position.
A
Different cultural groups react to fake predictions with different levels of susceptibility and panic. By that, what I mean is that if you look at certain countries and cultural backgrounds, they almost never have these moral panic freakouts when the prediction is inaccurate. Okay. Okay. So you're saying, like, you look at China, and China's not shitting a brick about this thing.
Yeah. China is not very susceptible to moral panic. Most East Asian countries aren't. India isn't particularly susceptible. China isn't particularly susceptible.
Japan isn't particularly susceptible, and South Korea isn't particularly susceptible. They just historically have not had these. And I remember I was talking with someone once, and they came up with, like, some example of a moral panic in China, and then I looked it up and it, like, wasn't true. So if you're like, no, here's some example of when this happened in China historically, like the Boxer Rebellion or something like that, I'm like, no, that was not a moral panic.
That was a... Or the Opium Wars. Like, the Opium Wars were an actual concern about something apocalyptic happening to people. It was a batches-of-people-dying issue, which was a real problem.
So certain cultures are hyper-susceptible to apocalyptic movements. Specifically, they spread really quickly within Jewish communities and within Christian communities. Those are the two groups that are most susceptible to this. Yeah.
Here's the problem. So you get the problem across the board here, which is that the places having the moral panics today around AI apocalypticism are 100% and nearly exclusively the communities that were disproportionately susceptible to incorrect moral panics on a historic basis: white Christians and Jews. You just don't see big AI apocalyptic movements in Japan or Korea or China or India. They're just not freaking out about this in the same way.
And keep in mind, I've made the table very big here. It's not like I'm just saying, oh, you're not seeing it in Japan. You're not seeing it in half the world, the half that's not prone to these types of apocalyptic panics. Okay. That is really big evidence to me.
Okay. That's point one. Point two is it has all of the characteristics of the fake moral panics, historically speaking, and none of the characteristics of the accurate panics, historically speaking. But I'm wondering if you're noticing any other areas where there is congruence in the moral panics that turned out accurate versus the ones that didn't.
B
The biggest theme to me is just invasion versus change. Like a foreign agent entering seems to be a bigger risk than something fundamentally changing from a technological standpoint, which is not what I expected you to come in with. So this is surprising to me. Yeah. Okay.
A
So if we were going to modify AI risk to fit into the mindset of the moral panics that turned out to be correct, like the apocalyptic claims that turned out to be correct, you would need to reframe it. You'd need to say something like, and this would fit correct predictions, historically speaking: if we stop AI development and China keeps on with AI development, China will use the AI to subjugate us and eradicate a large portion of our population. Yeah, that would have a lot in common with the types of moral panic predictions that turned out accurate. "AI will take people's jobs," "AI will destroy our culture," or "AI will kill all people,"
these feel very much like the historically incorrect ones. But I think you are underplaying something, which is that while these technological predictions, Luddites freaking out about the industrial revolution, people freaking out about the printing press, did not lead to the fall of civilization as expected, they did lead to fundamental changes. And AI will absolutely lead to fundamental changes in the way that we live and work. I don't argue that.
Have we ever argued that AI is not going to fundamentally change human civilization? We have multiple episodes on this point. Okay? Yeah. We say it's going to fundamentally change civilization.
It's going to fundamentally change the economy. It's going to fundamentally change the way that we even perceive humanity and ourselves. None of that is stuff that we are arguing against. We are arguing against the moral panic around AI killing everyone, and the need to delay AI advancement over that moral panic. Yeah.
B
And that is fair. And the point here being, you can actually learn something by correlating historic events. And it is useful to correlate these historic events to look for these patterns, which I find really interesting in terms of... So, and it makes sense: like with the industrial revolution, like with the spinning wheel, whenever you see something that is going to create, like, an economic and sociological jump for our civilization, there is going to be a Luddite reaction movement to it.
A
Never historically has there been a technological revolution without some large Luddite reaction. And it's not even that weird, because actually, if you look historically, Luddite movements often spread really well within the educated, non-working bourgeoisie. That group just seems really susceptible to Luddite panics. But I can tell you what: growing up, I never expected the effective altruist community and the rationalists and the singularity community to become sort of Luddite cults like that. I also never expected many so-called rationalists to turn to things like energy healing and crystals.
B
But here we are. So here we are. That's why we need to create a successor movement. And I really personally do see the pronatalist movement as that, because I look at the members of the movement, like at the pronatalist conference that we went to, and this is happening again this year: a huge chunk of the people were former members of the rationalist community and disaffected rationalists. And the young people I met in the movement were exactly the profile of young person, as I said, it's a hugely disproportionately autistic movement, who, when they were younger, or when I was younger, would have been early members of the rationalist EA movement.
A
And so we just need to be aware of the susceptibility of these movements to, one, mystic grifters, like you had with, if people want to watch it, our episode on the cult Leverage; or two, if they're not mystic grifters, forms of apocalypticism. And I should note, and people should watch our episode if they're like: when you talk about the world fundamentally changing because of fertility collapse, how is that different from apocalypticism? We have an episode on this, if you want to watch it. But the gist of the answer is we predict things getting significantly harder and economic turnover, but not all humans dying.
The nature of our predictions, and this is actually really interesting, and it's something you see from a historic perspective in the wrong movements: the nature of our predictions says, if you believe this, you need to adopt additional responsibilities, in terms of the fate of the world, in terms of yourself. Having kids is a huge amount of work. AI apocalypticism allows you to shirk responsibility, because you say the world's going to end anyway, I don't really need to do anything other than build attention.
That is, build my own reader base or attention network towards myself, which is very successful from a memetic standpoint at building apocalyptic panic. Because if somebody donates to one of our charities, 90% of the money needs to go to making things better. If you donate to an AI apocalypticism charity, most of the money is just going to advertising the problem itself, which is why these ideas spread. And that's also what you see historically with panics. My concern too is that with a lot of these projects that have been funded as part of x-risk philanthropy, no one's... the only people consuming them are the EA community.
B
So these things aren't reaching other groups. And we saw this also at one of the dinner parties we hosted. One of our guests was the leader of one of the largest women's networks of AI developers in the world. And a bunch of other people there were literally working in AI alignment. This woman had never even heard the term AI alignment.
These people working in AI alignment are not reaching out to people actually working in AI. They are not effectively reaching them, and they're also not reaching audiences of just broader people. They're all in this echo chamber within the EA and rationalist community, and they're not actually getting reach. So even if I did believe in the importance of communicating this message, I wouldn't support this community, because they're not doing it.
They're not doing it. Yeah. What they need is to create a network that funds attractive young women to go on dates with people in the AI space, to just live in areas where they are and try to convince them of it as an issue, but they won't. But a lot of people in it... Here's another thing that I noticed that cross-correlates between the two groups, actually.
I would love to see you apply for a grant with one of those x-risk funds of just: I will hire thirst traps to post on Instagram and to be on, like, to be on OnlyFans and just start... No, no, not OnlyFans. Give them cities, because there are some cities where these companies are based and where a lot of... No, no, for sure. And date them.
Yes, for sure. But I just... I love this idea of using women. But here's the other thing that's cross-correlated across all of the incorrect panics historically, which I find very interesting and I hadn't noticed: every one of the correct panics had something specific and actionable that you could do to help reduce the risk, whereas for almost all of the incorrect moral panics, the answer was just stop technological progress. That's how you fix the problem.
A
So if you look at the correct moral panics: Black Plague, World War Two, AIDS, DDT, asbestos, cigarette smoking, Native American warnings about Europeans. In every one of those, there was, like, an actionable thing that you needed to do. Like, go start doing asbestos removal. Go don't have DDT sprayed on as many crops.
AIDS? Oh, safer sex policy, stuff like that. However, if you look at the incorrect things, what are you looking at? Like, the splitting of the Higgs boson: you just need to stop technological development.
Industrial revolution: you just need to stop technological development. The speed of trains: you just need to stop technological development. Reading panic:
you just need to stop technological development. Radio: you just need to stop technological development. Printing press: you just need to stop technological development. An important point with all these...
B
And you could argue, actually, that this was an issue with nuclear as well. In fact, this discussion was had with nuclear: there was this one physicist who, one, believed that nuclear wouldn't be possible, but two, was also very strongly against censorship, because a lot of people were saying, we have to stop this research, it's too dangerous. And he just strongly believed that you should never, ever censor things in physics, that that's not acceptable. And then we did ultimately end up with nuclear weapons, and that is a real risk for us.
But I think the larger argument with technological development is someone's going to figure it out, and to a certain extent, it's going to have to be an arms race, and you're going to have to hope that your faction develops this and starts to own the best versions of this tech in a game of proliferation before anyone else. There's no... if you don't do it, someone else will. Yeah. And that's the other thing. Now, I haven't gone into this because this isn't what the video is about, but recently I was trying to understand the AI risk people better as part of Lemon Week, where I have to engage really heavily with steel-manning an idea I disagree with.
A
And one of the things I realized was a core difference between the way I was approaching this intellectually and the way they were, is I just immediately discounted any potential timeline where there was nothing realistic we could do about it. An example here would be a timeline where somebody says, AI is an existential risk, but we can remove that risk by getting all of the world's governments to band together and prevent the development of something that could revolutionize their economies. Because that won't happen. No, it's just stupid. It's a stupid statement.
Of course we can't do that. If we live in a world where, if we can't do that, AI kills us in every timeline, I don't even need to consider that possibility. It's not meaningful on a possibility graph, because there's nothing we can do about living in that reality. Therefore, I don't need to make any decisions under the assumption that we live in that reality. It's a very relaxing reality.
Yeah. And that's what gets me, is I realized that they weren't immediately discounting impossible tasks, whereas I always do. Like, when people are like, you could fix pronatalism if you could give a half-million grant to every parent, I'm like, cool, but we don't live in that reality, so I don't consider that.
B
Yeah. They're like, yeah, government policy interventions could work. You need a half million. I'm like, yeah. And people are like, technically we could economically afford it.
A
And I go, yes, but in no realistic governance scenario could you get that passed in anything close to the near future. I think it's just an issue of how I judge timelines to worry about and timelines not to worry about, which is interesting. Anyway. Love you to death. It'd be interesting if Scott watches this.
We chat with Scott; I'm friendly with him, but I also know that he doesn't really consume YouTube, so I don't know if this is something that will get to him. But it's also just useful for people who watch this stuff. And if you are not following Scott's stuff, you should be, or you are out of the cultural zeitgeist. That's just what I'm going to tell you. He is certainly still a figure that is far more respected than us as an intellectual.
And I think he is a deservingly respected intellectual. And I say that about very few living people. I know very few living intellectuals where I'm like, yeah, you should really respect this person as an intellectual because they have takes beyond my own occasionally. Yeah, he is wise, he is extremely well read, he is extremely clever and surrounded by incredibly clever people. And then beyond that, I would say he disagrees with us on quite a few things.
B
So yeah, we have a lot to learn from him, actually. Question, Simone: why do you think he didn't consider what I just laid out and think is a fairly obvious point, that you should be correlating these historical movements?
I just. I think that you have a way of looking at things from an even more cross disciplinary and first principles way than he does sometimes. So you both are very cross disciplinary thinkers, which is one reason why I like both of your work a lot. But I think in the algorithm of cross disciplinary thinking, he gives a heavier weight to other materials and you give a heavier weight to just first principles reasoning, and that's how you come to reach these different conclusions. Yeah, I'd agree with that.
A
Yeah. And I also think another thing he gives a heavier weight to, where I disagree with him most frequently, is things that are culturally normative in his community; he gives those a slightly heavier weight. Actually, you are very similar in that way, in that your opinion is highly colored by recent conversations you've had with people and recent things you watch. So it's something that both of you are subject to. I would say that maybe you are even more subject to it than he is, because you interact with people less than he does on a regular basis.
True, he's much more social than us. He's much more social than you. But you are extremely colored by what you're exposed to, so you're not exempt from this. But that is true. Yeah.
Actually, I would definitely admit that. Like, a lot of the trans stuff recently is just because I've been watching lots and lots of content in that area, which has caused YouTube to recommend more of it to me, which has caused sort of a loop, on top of something historically I wouldn't have cared about that much. One thing I'll just end with, though, and I'm still not even finished reading this, but Leopold Aschenbrenner, I don't know actually how his last name is pronounced, but he is, like, in the EA x-risk space. I think he's even pronatalist. No, he is. He was famously one of the first people to talk about pronatalism.
He just never put any money into it, even though he was on the board of FTX. He published a really great piece on AI that I am now using as my mooring point for helping me think through the implications of where we're going with AI. Seeing how steeped he is in that world, and how well he knows many of the people who are working on the inside of it, getting us closer to AGI, I think he's a really good person to turn to in terms of his takes. I think that they're better moored in reality, and they're also more practically oriented.
B
He wrote this thing called Situational Awareness: The Decade Ahead. You can find it at situational-awareness.ai. And if you look at his Twitter, if you just search Leopold Aschenbrenner on Twitter, it's, like, his Twitter URL link. He's definitely promoting it. I recommend reading that.
In terms of the conversation that I wish we were having with AI, he sets the tone of what I wish we were talking about, like how we should be building out energy infrastructure, and the immense security loopholes and concerns that we should be having about, for example, foreign actors getting access to our algorithms and weights, and the AI that we're developing right now, because there's very little security around it. So yeah, I think that people should turn to his write-up. That's a great call to action. And I was just thinking, I had another idea as to why maybe I recognized this when he didn't. Because this is very much me asking, why did somebody smarter than me, or who I consider smarter than me, not see something that I saw as, like, really obvious, and not include and, like, discount it in his piece?
A
Of course, you would cross-correlate the instances of success with the instances of failure in these predictions. I suspect it could also be that my entire worldview and philosophy, and many people know this from our videos, comes from a memetic-cloud-first perspective. I am always looking at the types of ideas that are good at replicating themselves and the types of ideas that aren't good at replicating themselves when I am trying to discern why large groups act in specific ways or end up believing things that I find off or weird, like, how could they believe that? And that led me to, in my earliest days, become, as I've mentioned, really interested in...
cults. Like, how do cults work? Why do religions... like, how do people convince themselves of stuff that, to an outsider, seems absurd? And so, when I am looking at any idea, I am always seeing it through this memetic lens first. And I think when he looks at ideas, he doesn't first filter them through a memetic
"why would this idea exist" before he is looking at the merits of the idea. Whereas I often consider those two things as of equal standing to help me understand how an idea came to exist and why it's in front of me. But I don't think that he has this second obsession here. And I think that's probably why. Maybe.
B
Yeah, yeah, yeah. But I like it when people come to different conclusions because it's always something in between there that I find the value. I don't know if that's helpful. I actually think that's an unhelpful way to look at things. I think you shouldn't look for averages, but you can.
I don't look for averages. I find stuff. I think when you look at what is different, you find interesting insights. It's not an average of the two. It's not a mean, a median, or a mode.
It is unique new insights. It's more about emergent properties of the elements of disagreement that yield entirely new and often unexpected insights. Not something in between, not compromise. You are a genius, Simone. I am so glad.
A
As the comments have said, you're the smarter of the two of us. And I could not agree more, and I will hit you with that every time now, because I know this drives you nuts. You know that you're the smarter one, that even our polygenic scores for intelligence show that you're the smarter one. Yeah. We went through our polygenic scores recently, and one of the things I mentioned in a few other episodes is that I have the face of somebody who, you know, when they were biologically developing, was in a high-testosterone environment.
When contrasted with Andrew Tate, that's where I often talk about it, he has the face of somebody who grew up in a very low-testosterone environment. Believe it or not, when I was going through the polygenic markers, I came up 99th percentile on testosterone production, in some of the top 1% of the population in terms of just endogenous testosterone production. So, yeah, of course, when I was developing, I was just flooded in this stuff. That's why I look like this. And then you were, what, in, like... what percentile of pain tolerance was I in?
B
99%. Yeah, you were like, 99% for pain tolerance. That I would explain. No, I like it. But being high testosterone, but actually feeling pain and just being like, nah, not going to engage in those scenarios.
Yeah, it's probably a good mix of noping out of there. It's a good mix of being tough but noping out the moment it becomes dangerous. Yes. High risk, but good survival instinct. Very good.
A
Yeah. Especially because you also have fast-twitch muscle, which I don't. When you nope out of a place, you're out real fast.
She has this joke about me being able to, like, bamf out of a situation, like Nightcrawler. Whenever something dangerous happens, I'm 20 feet away somewhere else. Yeah. Like, I turn and he's just gone. And, like, a car is hurtling toward me.
You are so slow. You actually remind me of, like, a sloth. I need to get better at yanking you out of the way. And you literally have to pull me, because I'm, like... when cars are coming at us.
Because she started crossing the road and she, like, didn't expect... she cannot, like, speed up above a fast walk. But I hate moving so quickly. I'm also, like, contemplating, do I want to die, or should I try to move? You really come off that way. Yeah.
Nightcrawler. I have to bamf. We're gonna bamf back and grab you.
B
God, I'm gonna die. Yeah. I love you there. I love you so much. Simone, you're amazing.
A
Hey, I would love to get the slow cooker started on the tomatoes and meat that I got. But you still have about two days' worth of the other stuff. Yeah, but it's easier to just freeze this stuff if I do it all at once now, and then... overnight...
I can also leave it cooking for a few days. I can do that. Do I have time to make biscuits or muffins? Cornmeal muffins? Yeah.
B
If I go down right now, I can make cornmeal muffins. Would you like cornmeal muffins? I'm okay with that. Yeah. Okay.
A
You're so nice. Cornmeal goes great with slow cooked beef. And you're still gonna have the slow cooked beef that you made earlier this week, right? I'm heading down. She's asleep on my lap.
B
I don't want to look. She's like, Blair, but I'll get mommy. She loves sleeping. I love you so much, Simone. You're a perfect mom.
I love you too, Malcolm. And you've got to get that pouch so you can get that pocket on, okay? Want me to order it right now? Oh, no. I need to contemplate whether or not we should spend money on that or new carbon monoxide detectors.
A
No, you're getting the new carbon monoxide detectors. Just let me get this for you as a gift. Okay? Here, I'm getting it right now. It's $19.
B
I'll get it with my money. Okay? I just got it. No, it was my money. I'm the one who's demanding that you get a pocket because I'm so freaking annoyed at you walking around without a pocket.
All right, Malcolm? It is annoying, Simone. It causes me dissatisfaction. Okay, I will see you downstairs with my corn muffin hands, ready to go. Okay, love you.
A
Bye. I guess you could call it the Dunning-Kruger trap, where, you know, the Dunning-Kruger effect is where people who know less about something feel more confident about it. Right. What happened? He just noped right out of there.
B
Is it a bug? Was it a mouse?
A
It was a beer. It was the beer that you knocked over yesterday. Oh, the one that... no, Titan knocked it off the table. Oh, and you're like, you'd better not open that one. That's what just happened.
Oh, God. Okay. Whoops. So, I said the Dunning-Kruger effect, whereby people who know less about something feel more confident about it. By the way, the Dunning-Kruger effect does not replicate.
B
And then, anyway, still, people are familiar with it. And then people who know more about something often say that they know less. And I think that there gets to be a certain point where, when you know a ton about something, you just start to become very uncertain about it, and you're not really willing to take any stance, which is something I saw a lot in academia, where the higher up in academia I got, the more the answer was always "it depends." Instead of... that is, whatever you're talking about has nothing to do with any of the points I'm going to make.
See if you smile when daddy appears on the screen. Daddy, look at that. It's daddy.
A
She doesn't see. She doesn't see. She's. I haven't gotten her eyes on the screen. She's got a screen.
Do you recognize me at all? I don't know if they can recognize things on screens in the same way that adults can. I don't know either. Yeah, she doesn't seem to be focusing on it. So, yeah, she doesn't.
She can't see me. Oh, well, really, it's okay. We love you. Anyway, I will get us started here. Oh, I will pull this aside.
How could you tell that it was bad at creating websites, by the way? Because after you buy a domain, it will, like, literally take the name of your domain, like the words within it, and then make assumptions based on that. Okay, for example, because I got pragmatistfoundation.org, they're like, oh, you're a pragmatist foundation, and you're a .org, so you're a nonprofit. And so here's a nonprofit website for a foundation that likes pragmatism.
B
And then it made up copy based on that, and had a picture of kids sitting at desks, with something like "creating solutions that are pragmatic," which is not terribly far off, but it... Doesn't sound so bad. No, it's not so bad. Just in case you are wondering, the reason why we're looking at buying websites right now is we needed to get the .org for the Pragmatist Foundation, because people were emailing the wrong address, because we have the .com for that.
A
But also, I've been thinking about building a website for the techno-puritan religion and seeing if I can get it registered as a real religion, which would be pretty fun, especially if I'm able to put religious wear in there, like, you always have to be armed. That was to see if you can get religious exceptions, for which I do believe there is a religious mandate, for concealed carry and stuff. That would be interesting from a legal perspective. It'd be funny if we had, like, a religious mandate for always having to carry ceremonial sloths with us.
B
It's my religious sloth. You can't not let me into your restaurant wearing it. You want to enshrine specific rights that people would want. I think you can do stuff around data privacy and things like that, which makes sense within a religious context to us, but also provides a legal tool to people who want access to this stuff.
A
That could be interesting, which also helps the religion spread. So that'd be fun. Yeah. All right, so I am opening this up here.
B
Doing wiggles. Okay. You better not let her wiggle. I better not. She's full of all the wiggles.
A
She's full of all the wiggles.
B
She's full of all the wiggles.