#188 - Matt Clancy on whether science is good

Primary Topic

This episode of the 80,000 Hours podcast delves into the potential benefits and risks of accelerating scientific progress, focusing on its impact on society and the future.

Episode Summary

In this intriguing episode, host Luisa Rodriguez and guest Matt Clancy, a research fellow at Open Philanthropy, explore whether investments in speeding up scientific progress are beneficial or detrimental to society. They discuss the complex relationship between scientific advancement and its potential to both solve and create problems. Clancy highlights scenarios where rapid scientific advancements could lead to existential risks, such as advanced biotechnology falling into the wrong hands. They also touch on the challenges of predicting long-term consequences of accelerated science and the philosophical and practical aspects of funding science for long-term gains.

Main Takeaways

  1. Accelerating Science Comes with Risks: Clancy discusses how speeding up science might not only accelerate beneficial discoveries but could also hasten the development of dangerous technologies.
  2. The Complexity of Predicting Science Impact: The episode covers how it's challenging to forecast the long-term effects of accelerated scientific progress due to the unpredictable nature of technological applications.
  3. Meta-Science and Its Value: The potential of meta-science (science about improving science) is discussed, with Clancy advocating for investments in improving scientific institutions to increase the overall effectiveness of scientific research.
  4. Philosophical Considerations: The conversation includes a philosophical look at whether all scientific progress is inherently good, considering historical examples where technology has had both positive and negative impacts.
  5. Practical Approaches to Innovation Policy: Practical aspects of managing innovation policy are examined, focusing on how to potentially guide scientific progress in directions that maximize public good while minimizing risks.

Episode Chapters

1: Introduction and Overview

Host Luisa Rodriguez introduces the episode's theme and guest, Matt Clancy. They set the stage for a discussion on the complexities of scientific progress.

  • Luisa Rodriguez: "Welcome to a new episode where we explore the intricate balance of advancing science responsibly."

2: The Risks of Accelerated Science

Clancy explains scenarios where faster scientific progress could lead to significant risks, using examples from biotechnology.

  • Matt Clancy: "Accelerating science could inadvertently advance technologies we aren't ethically or practically ready to handle."

3: The Role of Meta-Science

Discussion on how meta-science can enhance the efficacy of scientific research and its potential high return on investment.

  • Matt Clancy: "Investing in meta-science could yield substantial returns by making all science more effective."

4: Ethical and Philosophical Implications

They delve into the moral implications of scientific advancement and the responsibility of scientists and policymakers.

  • Matt Clancy: "We must consider the ethical dimensions of accelerating science, not just the technical ones."

5: Concluding Thoughts

The episode wraps up with thoughts on how society can better prepare for the dual-edged nature of scientific progress.

  • Luisa Rodriguez: "Thank you for joining us on this thoughtful journey through the potential futures shaped by our scientific choices."

Actionable Advice

  1. Support Research in Meta-Science: Engage with and fund research aimed at improving the efficiency and integrity of scientific processes.
  2. Promote Ethical Science Practices: Advocate for ethical guidelines that keep pace with technological advancements.
  3. Educate on the Dual Uses of Science: Increase public awareness about the potential dual uses of scientific discoveries.
  4. Develop Robust Policy Frameworks: Work towards creating robust policy frameworks that can handle the rapid pace of scientific innovation.
  5. Encourage Interdisciplinary Collaboration: Foster collaboration between scientists, ethicists, and policymakers to ensure well-rounded approaches to scientific development.

About This Episode

"Suppose we make these grants, we do some of those experiments I talk about. We discover, for example — I’m just making this up — but we give people superforecasting tests when they’re doing peer review, and we find that you can identify people who are super good at picking science. And then we have this much better targeted science, and we’re making progress at a 10% faster rate than we normally would have. Over time, that aggregates up, and maybe after 10 years, we’re a year ahead of where we would have been if we hadn’t done this kind of stuff.

"Now, suppose in 10 years we’re going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we’ve brought that forward, and that happens at year nine instead of year 10 because of some of these interventions we did, now we start to think that if that’s really bad, if these people using this technology causes huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster." —Matt Clancy

In today’s episode, host Luisa Rodriguez speaks to Matt Clancy — who oversees Open Philanthropy’s Innovation Policy programme — about his recent work modelling the risks and benefits of the increasing speed of scientific progress.

People

Matt Clancy, Luisa Rodriguez

Companies

Open Philanthropy

Books

None mentioned

Guest Name(s):

Matt Clancy

Content Warnings:

None

Transcript

Matt Clancy
Say you have a billion dollars, a billion dollars per year. You could either give it to science, or you could spend it improving science, trying to build better scientific institutions, make the scientific machine more effective. And, just making up numbers, say that you could make science 10% more effective if you spent a billion dollars per year on that project. Well, we spend like $360 billion a year on science. If we could make that money go 10% further and sort of get 10% more discoveries, it'd be like we had an extra $36 billion in value.

And if we think each dollar of science generates $70 in social value, this is like an extra $2.5 trillion in value per year from this $1 billion investment. That's a crazy high ROI, like 2,000 times as good. And that's the kind of calculation that underlies why we have this innovation policy program, and why we think it's worth thinking about this stuff, even though there could be these downside risks and so on, instead of just doing something else.
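
To make that back-of-envelope concrete, here is a minimal sketch in Python using the same illustrative numbers quoted in the conversation ($1 billion per year on metascience, $360 billion per year on science, a hypothetical 10% effectiveness gain, and roughly $70 of social value per dollar of science). It restates the quoted reasoning; it is not output from the report's actual model.

```python
# Back-of-envelope version of the ROI reasoning above (illustrative numbers only).
meta_science_spend = 1e9        # hypothetical $1B/year spent improving scientific institutions
global_science_spend = 360e9    # ~$360B/year spent on science
effectiveness_gain = 0.10       # suppose that $1B makes science spending 10% more effective
value_per_science_dollar = 70   # ~$70 of social value per $1 of science (the report's estimate)

extra_effective_science = global_science_spend * effectiveness_gain      # ~$36B
extra_social_value = extra_effective_science * value_per_science_dollar  # ~$2.5 trillion
roi = extra_social_value / meta_science_spend                            # a couple thousand times

print(f"Extra effective science: ${extra_effective_science / 1e9:,.0f}B per year")
print(f"Extra social value: ${extra_social_value / 1e12:,.1f}T per year")
print(f"Return per dollar of metascience spending: ~{roi:,.0f}x")
```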

Luisa Rodriguez
Hi, listeners. This is Luisa Rodriguez, one of the hosts of the 80,000 Hours podcast. In today's episode, I speak with Matt Clancy, an economist and research fellow at Open Philanthropy. I've been excited to talk to Matt for a while because he's been working on what seems to me like a really important issue: one, how to accelerate good scientific progress and innovation; and two, whether it's even possible to accelerate just good science, or whether by accelerating science, you accelerate bad stuff along with good stuff.

We explore a bunch of different angles on those questions, including: how much we can boost health and incomes by investing in scientific progress; scenarios where accelerating science could lead to existential risks, such as advanced biotechnology being used by bad actors; how scientific breakthroughs can be used for both good and bad, and often enable completely unexpected technologies, which is why it's really hard to forecast the long-term consequences of speeding it up; Matt's all-things-considered view on whether we should invest in speeding up scientific progress, even given the risks it might pose to humanity; plus non-philosophical reasons to discount the long-term future. At the end of the episode, we also talk about reasons why Matt is skeptical that AGI could really cause explosive economic growth. Okay, without further ado, I bring you Matt Clancy. Today I'm speaking with Matt Clancy, who runs Open Philanthropy's innovation policy grantmaking program and who's a senior fellow at the Institute for Progress.

He's probably best known as the creator and author of New Things Under the Sun, which is a living literature review on academic research about science and innovation. And I'm just really grateful to have you on. I'm really looking forward to this interview in particular. Thanks for coming on, Matt. Yeah, thank you so much for having me.

I hope to talk about why you're not convinced we'll see explosive growth in the next few decades, despite progress on AI. But first, you've written this massive report on whether doing this kind of metascience, this kind of thinking about how to improve science as a whole field, is actually net positive for humanity. So you're interested in whether a grantmaker like Open Phil would be making the world better or worse by, for example, making grants to scientific institutions to replicate journal articles. And I think at least for some people, myself included, this sounds like a pretty bizarre question. It seems really clear to me that scientific progress has been kind of the main thing making life better for people for hundreds of years now.

Can you explain why you've spent something like the last year looking into whether these metascience improvements, that should bring us better technology, better medicine, better kind of everything, might not be worth doing at all? Yeah, sure. And to be clear, I was the same when I started on this project. I maybe started thinking about this a year and a half ago or so. I was like, obviously this is the case.

Matt Clancy
It's very frustrating that people don't think this is a given. But then I started to think that taking it as a given seems like a mistake. And in my field, economics of innovation, it is sort of taken as a given: science tends to almost always be good, and progress and technological innovation tend to be good. Maybe there are some exceptions with climate change, but we tend to not think about that as being a technology problem.

It's more like a specific kind of technology being bad. But anyway, let me give you an example of a concrete scenario that was sort of the seed of beginning to reassess, and to think it's interesting to interrogate that underlying assumption. So suppose we make these grants, we do some of those experiments I talk about. We discover, for example (I'm just making this up), that if we give people superforecasting tests when they're doing peer review, we can identify people who are super good at picking science. And then we have this much better targeted science, and we're making progress at, I don't know, a 10% faster rate than we normally would have.

Over time, that aggregates up, and maybe after ten years, we're like a year ahead of where we would have been if we hadn't done this kind of stuff. Now, suppose in ten years we're going to discover a cheap new genetic engineering technology that anyone can use in the world if they order the right parts off of Amazon. That could be great, but could also allow bad actors to genetically engineer pandemics and basically try to do terrible things with this technology. And if we've brought that forward, and that happens at year nine instead of year ten because of some of these interventions we did.

Well, now you start to think: well, if that's really bad, if these people using this technology cause huge problems for humanity, it begins to sort of wash out the benefits of getting the science a little bit faster. And in fact, it could even be worse: because what if year ten was also when AGI happens, for example? We get a super AI, and when that happens, the world is transformed. We might discuss later why I have some skepticism that it will be so discrete, but I think it's a possibility.

And so if that happens, maybe if we invented this cheap genetic engineering technology after that, it's no risk: the AI can tell us, oh yeah, here's how you mitigate that problem. But if it comes available before that, then maybe

we never get to the AGI, because somebody creates a super terrible virus that wipes out 99% of the population, and we're in some kind of dystopian, apocalyptic future or something like that. All right, so anyway, that's the sort of concrete scenario. And your instinct is to be like, come on. But you start to think about it, and you're like, it could be.

We invented nuclear weapons. Those were real. They can lead to a dystopia, and, like, they could end civilization. There's no reason that science has to always play by the same rules and, like, just always be good for humanity. Things can change.

And so I started to think it would be interesting to spend time interrogating this kind of assumption and see if it's a blind spot for my field and for other people. Cool. Yeah, there's a lot there, and I want to unpack a few pieces of it. Yeah, I guess.

Luisa Rodriguez
One, I think when I first encountered the idea that someone out there as an individual might realize that they could use some new genetic engineering technology to try to create some, I don't know, civilization ending pandemic, I was like, who would want to do that? And, like, surely one person can't do that. But then I interviewed Kevin Esvelt and got convinced that this was way more plausible than I thought. And so we'll link to that episode if you're just like, what? What are you guys talking about?

I recommend listening to that. Yeah, maybe the other thing is, I do find the nuclear weapons example really helpful, because it does just seem really plausible to me that had science been a little bit further behind, all of the work that went into making nuclear weapons happen when they did might not quite have been possible. It was already clearly a huge stretch, and clearly some countries did try to make them, but kind of failed to do it at the same pace as the Americans did, and maybe a year of better science would have made the difference for them. And all of these possible effects are really hard to weigh up against each other and make sense of, and to decide whether the world would have been better or worse if other countries had nuclear weapons earlier or if the Americans had nuclear weapons later. But it does kind of help me, I guess, pump that intuition that having science a year faster

than we might have otherwise can, in fact, really change the way certain dangerous weapons technology looks and how it plays out. Does that all feel like, I don't know, it's getting at the right thing? Yeah, I think that's the core idea: that science matters. There are technologies that cannot be invented until you have the underlying science understood.

And some of them are bad. Yeah. And science is, in some sense, morally neutral. It just gives you the ability to do more stuff, and that has tended to be good, but it doesn't have to be.

Yeah. Okay. So I guess I think there are some reasons people still might be skeptical of this whole question, but maybe before we get to them, I'm also just a bit unsure of how action-relevant this all is. Like, if it turned out that you wrote this whole report and it concluded that actually accelerating science is kind of net negative, what would Open Philanthropy do differently? I imagine you wouldn't stop funding new science, or maybe you would; or at the very least, I imagine you wouldn't go out and start trying to make peer review worse.

Matt Clancy
Yeah. So I started this project because of people at Open Phil. I joined Open Phil in November 2022 to lead the innovation policy program.

There were a number of people who were concerned about this, and it affects kind of the potential direction that the innovation policy program should go. We're not going to, as you say, try to thwart science in any way, but there are finite resources and there are different things you can do. So one option would be we wind down: we don't pursue stuff that's going to accelerate science just across the board, like finding new ways to do better peer review or so on. Instead of making grants to those organizations, we make them to other organizations that might have nothing to do with science at all, like farm animal welfare or something like that.

So that's one possible route you could go. Another route is you could say, all right, well, we take it away that accelerating science is not good, but that's only one option. There are other options available to you, including trying to pick and choose which kinds of science to fund, and some you might think are better than others. And so maybe we should work on developing tools and technologies or social systems or whatever that align science in a direction that we think is more beneficial.

So you could basically focus less on the speed and more on how does science choose wisely what to focus on and so forth. So those are kind of different directions you could go. It could be just stuff outside of science, or it could be trying to sort of improve our ability to be wise in what we do with science. Okay, fair enough. Yeah.

Luisa Rodriguez
Let's then get back to whether this is even a real problem, because I think part of me is still kind of skeptical. I think the thing that makes me most skeptical is just like, it does seem like there's clearly a range in technology. Some technology seems just, like, really, really clearly only good. I don't actually know if I can come up with something off the top of my head, but, like, maybe vaccines, arguably. And some technology seems like much more obviously dual use.

So, like, has some good applications, but also can clearly have some bad ones, like genetic engineering. Why are people at Open Philanthropy worried that you can't get those beneficial technologies without getting the dangerous ones by just, like, thinking harder about it? So I think that, to an extent, we do think that that is possible. And we have programs that work on this in specific technology domains, like biosecurity or AI, that make decisions that are kind of like the kind you're talking about, where we think giving a research grant to this would be good, and giving a research grant to that would maybe not be good. So we're not going to do that kind of thing.

Matt Clancy
As we'll sort of talk about, there's a lot of value on the table from just improving overall science. Like, if you could make overall science a bit more effective, and if science is on average good, then you should totally do that. And yeah, I think that's the core argument: that's always an option, to just pick and choose. But we want to explore whether interventions that can't be targeted are also worth pursuing, even though they can't be targeted. So, take high-skilled immigration: it's not going to be at a level of granularity where some change to legislation says, if your research project is approved by the NSF as good, then you can get a green card.

It's just going to be: if you meet certain characteristics, you qualify, and that can improve the efficacy of science across the board. Okay. Yes, you can try to primarily fund safe and valuable science, but if you want to do any of this really high-level, general, make science as a field just be better, be more efficient,

Luisa Rodriguez
yeah, create more outcomes at all, then you can't make sure all of those things trickle down into, like, the right science and not the bad science. That's just not the level you're working on. Yeah.

Matt Clancy
And I think that also, at a higher level, there are limits to how well you can forecast stuff. So there's this famous essay by John von Neumann called "Can We Survive Technology?", and he's writing in kind of the shadow of the nuclear age and all this. And he has this interesting quip where he's talking from firsthand experience in a way that's really interesting, where he's like, well, it's obvious to anyone who was involved in deciding what technology should be classified and not (and I was not involved in that, but presumably he was during the war or something like that); he's like, it's obvious to those of us who've been in that position that separating the two is an impossible task over more than like a five-to-ten-year timeframe, because you just can't parcel it out.

And science is really, I think, going into the unknown and trying to figure out how something works when you don't know what the outcome is going to be. It's an area where it's very hard to predict what the outcome is going to be. It's not impossible, and we try to make those calls, and that's good, but I think it's also important to recognize that there are strong limits to even how well that approach can work. So, for example, consider video games.

Video games seem harmless, but the need for video games created demand and incentive to develop chips that can process lots of stuff in parallel really fast and cheaply. Seems benign. Another thing that seems benign: let's digitize the world's text, put it on a shared web, make it available to everyone, create forums for people just to talk with each other: Twitter, Reddit, all this stuff. Maybe it doesn't seem as benign, but it doesn't seem like it's an existential risk, perhaps. But you combine chips that are really good at processing stuff in parallel with an ocean of text data, and that gives you the ingredients you need to develop large language models, which we probably accelerated the development of. And the jury's still out on whether these are going to turn out to be good or bad, as you guys have covered before in different episodes.

Luisa Rodriguez
Got it. Okay, that makes a bunch more sense to me. Yes, you can forecast whether this, I don't know, very genetic-engineering-related science project is going to advance the field of genetic engineering. But if you go far enough out, the way different technologies end up being used really does become much more unpredictable. And I don't know, 30 to 40 years before we had nuclear weapons, there was probably some science that ended up helping create nuclear weapons that no one would have said:

this is probably going to be used to make weapons of mass destruction. Yeah. And science is full of these sort of serendipity moments, where you find a fungus in the trash can and it becomes penicillin. Teflon was discovered by people working on trying to come up with better refrigerators.

Matt Clancy
You know, Teflon was also used in seals and stuff in the Manhattan Project. So I'm not saying the Manhattan Project wouldn't have been possible without refrigeration technology, but, you know, it contributed a little bit. Yeah. Interesting. Okay, let's talk about the main scientific field you think might end up being kind of net harmful to society, such that we might not want to accelerate it.

Luisa Rodriguez
Synthetic biology. What kind of evidence have you found convincing to make you think that there's some chance that synthetic biology could get accelerated by broad scientific interventions in a way that could make science look bad in the end? Yeah, sure. So we can focus on kind of two rationales.

Matt Clancy
And just as an aside, there are lots of other risks that you could focus on. But the life sciences are a big part of science, and so that's one reason we focused on that. AI risk is the thing that's in the news right now, but I think what's going on with AI is not primarily driven by academic science.

It's sort of a different bucket, so that's why we didn't focus on it in this report. Okay, that makes sense. But anyway, turning back to synthetic biology: biology is sort of this nonlinear power, where if you can create one special organism, you can unleash it.

And if one person could do that, it would have this massive impact on the world. Why hasn't that ever happened? Well, one reason is because working with frontier life science and cutting-edge stuff is really hard, and requires a lot of access to specialized knowledge and training. It's not enough even to just read the papers: you have typically needed to go to labs, collaborate with people, get trained in techniques that are not easy to describe with text. And so this need to collaborate with other people and spend a lot of time learning has been a barrier to people misusing this technology.

At least one guardrail against this, and sometimes it gets supplanted. But it's been a thing that makes it hard for people to work. And this is kind of a general science thing, too, is that frontier science requires working as a team more and more. And so it's harder to do bad stuff when you need a lot of people operating in secret with specialized skills and how do you find those people and so forth. The trouble is, like, AI is a technology that helps leverage access to information and helps people learn and helps people figure out how to do stuff that maybe they don't necessarily know how to do.

So there is a concern; like, this is the concern. It's not necessarily that AI today is going to be able to help you enough, because, as I said, a lot of what you need to know is maybe not written down. But it gets better all the time. People are going to maybe start trying to introduce AI into the workflows in their own labs. It seems not at all surprising that we would develop AI to help people learn how to do their postdoc training better and stuff.

And so maybe this stuff will get into the system eventually. And then if it leaks out, instead of needing lots of time and experience and to work with a bunch of people, maybe an individual or a small group of people can begin to do more frontier work than would have been possible in the past. So that's one scenario. Another line of evidence is drawn from a completely different domain.

There was this forecasting tournament, actually in 2022, called the Existential Risk Persuasion Tournament, which turns out to be a really important part of this report. We'll go over it; I bet we'll talk about it more later. But in short, there were like 170 people involved, and they forecast a bunch of different questions related to existential risk.

Some of those questions relate to the probability that pandemics of different types will happen in the future. And they ask people: what do you think is the probability a genetically engineered pathogen will be the cause of the death of more than 1% of humans before the year 2030? Before the year 2050? Before the year 2100? And because they give these different timelines, you can kind of trace out what people think is the probability of these things happening.

And there are two major communities that participated in this project. There are experienced superforecasters, who've got experience in forecasting and have done well in it before. And then there are domain experts, so people with training in biosecurity issues. And both of those groups forecast an increase in the probability of a genetically engineered pandemic occurring after 2030 relative to before. And they both not only see an increase, but an increase relative to the probability of just a naturally arising pandemic occurring.

So you could have been worried that, oh, well, they just think that in a more interconnected world, more stuff is going to happen. But, for example, the superforecasters actually think that the probability of a naturally occurring pandemic will go down, but the probability of a genetically engineered pandemic will go up. That kind of suggests that this is a group that also sees new capabilities coming online that are going to allow new kinds of hazards to come along. And this was a really big group of 170. They were incentivized to try to be accurate in a lot of different ways.

They were in this online forum where they could exchange all their ideas with each other. I think it's probably the best estimate we have of something that is inherently really nebulous and hard to forecast. But I think, you know, this is the best we've got, and it sort of does see this increase in risk coming. Cool. Yeah.

This whole report, I think, would be way worse if this project had not existed, so many thanks to the Forecasting Research Institute for putting this on. Yep, yep. Okay, so that aside: the headline result is basically that these different communities that did these forecasts do think that there's going to be some period of heightened risk.

Luisa Rodriguez
And it's kind of different to just increasing risks of natural pandemics because of the fact that we fly on planes more and can get more people sick or something. It seems like it's something about the technological capabilities of different kinds of actors improving in this field of biology. And so I think in your report, you give this period the name "the time of biological perils," or "the time of perils" for short. And so all of that is basically how you're kind of thinking of science as obviously being good, but then also having this cost. And that's

yeah, this time of perils. Right. So, just as a quick summary: the short model is basically that we're in one era, and then the time of perils begins when there's this heightened risk. And how long does the time of perils last? It could kind of last indefinitely, or... well, there's a discount rate that kind of helps; that's one way you can get out of it.

Matt Clancy
But the short answer is like, there's two regimes that you can be in. Yep. Just to make. Yeah. This idea of the time of biological perils a bit more intuitive.

Luisa Rodriguez
I think maybe we've talked enough about how and why we might enter it, why it might start. And obviously it's not discrete; it's not like one day we're not in the time of perils and the next day we are. But we're going to talk about it like that for convenience. And then we think it might end for various reasons. Maybe we create the science we need to end it by just, I don't know, solving disease.

And so we can no longer use biological weapons to do a bunch of damage. Maybe it ends in some other way, but it could end. Or it could just be like, and now we have this technology forever and there's no real solution. And so just like an indefinite time of perils might start at some point, and then from that point on, we just have these higher risks. And that's really bad.

Matt Clancy
Yeah, this isn't the first time that people have invented new technologies that bad actors use, and then they just become this persistent problem. Bombs, yeah: arguably, we're just in the time of nuclear perils now. Right? Right. Cool.

Luisa Rodriguez
Okay, so that's how to kind of conceptualize that. Let's talk about the benefits of science. You take on the incredibly ambitious task of modeling the benefits of science. How do you even begin to put a number on the value of, I don't know, say, a year's worth of scientific progress? Yeah.

Matt Clancy
We were inspired by this paper by economists Ben Jones and Larry Summers, which put a dollar value on R&D. And to clarify, not all R&D is science. Science is research about how the world works, what the natural laws are, and so forth. A lot of other research is not trying to figure that out; it's trying to invent better products and services.

And so this paper from 2021 by Larry Summers and Ben Jones asks: what if you shut off R&D for one year? That's the thought experiment they imagine: what would you lose, and how much money would you save? So you would save the money that you would normally spend on R&D. It's a little more complicated than that, because it also affects the future path of R&D.

But as a first approximation, you save that year's R&D spending, but you lose something, too. And so they want to add up the dollar value of what you lose. And the thing that's clever about it is that economists think that essentially all of income growth comes from technology: the level of income of a society is ultimately driven by the technology it's using to produce stuff. And the technology that we have available to us, especially in the US, comes from the research and development that is conducted.

And so if you stop the research, you stop the technological progress, and you slow your income growth. In the US, to a first approximation, GDP per capita grows about 2% every year, adjusted for inflation. And so if you pause R&D, you don't grow by 2% for one year (at some point, maybe with a delay), and then every year thereafter you grow by 2%. But because you missed that one year, you're always 2% poorer than you otherwise would have been. And so we're going to borrow this idea, but apply it to science.
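
As a quick illustration of that thought experiment, here is a minimal sketch with simple compounding only; the actual paper and report handle delays, diffusion, and other complications.

```python
# Minimal sketch of the "pause R&D for a year, stay permanently poorer" logic above.
growth = 0.02      # ~2%/yr US GDP-per-capita growth, adjusted for inflation
years = 30

baseline = (1 + growth) ** years          # income index if growth never pauses
paused = (1 + growth) ** (years - 1)      # income index if one year of growth is skipped

gap = 1 - paused / baseline
print(f"Income level after {years} years is ~{gap:.1%} lower, and stays that much lower every year thereafter")
```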

And so we're going to say: if you pause science, you lose... well, you don't lose all of economic growth, because unlike with R&D, some technological progress doesn't depend on science. And so a big problem we're going to have to solve is what share of growth comes from science. But you lose some, and then you're a little bit poorer every year thereafter. But we're also going to have this flip side, which is that we have this mental framework of the time of perils, which begins at some year. And we're assuming that the time of perils begins because we discover scientific principles that lead to technologies that people can use for bad things.

If we delay by one year the development of those scientific discoveries, we're also going to delay by one year the onset of that time of perils. And that's going to tend to make us, in this model, spend less overall time in the time of perils, basically. So, first approximation: one year less time. So we've got benefits we'd lose, namely the income we would get from a year of science, and we've got costs, which is that we get into this time of perils a bit faster. And then we add a third factor, which is that the idea behind the time of perils is that there are these new biological capabilities that people can use for bad reasons.

But obviously, the reason people develop these biological capabilities is to try to help people, to try to cure disease and so forth. And so we want to capture the upside that we're getting from health too. And so we also try to model the impact of science on health. And all of this is possible to put together in a common language, because at Open Philanthropy we've got grants across a bunch of different areas, and we have these frameworks for how we trade off health and income benefits. A couple of useful caveats.

One, we're going to be using Open Philanthropy's framework for how we value health versus income, and other people may disagree; so that's one thing. And then, of course, you could also argue that there are benefits to science that don't come from just health and income. There might be value to just knowing things about the world, like the intrinsic value of knowledge. And we'll be missing that in this calculation, too.

Luisa Rodriguez
Right. Okay, so we'll keep that in mind. So that's kind of the framework you're using. So then from there, and this is incredibly simplified, you make basically a model (well, you make several models with different assumptions) where you first consider how much a year's worth of scientific progress is worth in terms of increased health and increased incomes for people.

Then you consider how big the population is, I guess both at a single time, but also over time, because it's not just the people alive now that are going to benefit from science; it's also generations and generations to come. But then you also have a discount rate, which is basically saying that we value the benefits people get from science in the very near future more than we value the theoretical benefits that people in the future might get from science. And basically, that's a lot of things.

And I want to come back to the discount rates, because that's actually really, really important. But before we move on and talk about how you model this all together, I think there's still a piece that hasn't been hammered home enough for me. Because if I'm like, what if we'd skipped last year's science and we were stuck at the level of science of a year ago? I'd be like, meh. So I think there's something about how the benefits of science over the last year kind of accrue over time. Can you help me understand why I should feel sad about pausing science for a year?

Matt Clancy
Yeah, sure. And, you know, it's an open question: maybe you'll feel happy. Right, right. But no, spoiler alert: I think... well, it's complicated, as we'll see. Anyway, you pause science.

Yeah, you don't miss anything, right? Like, you maybe didn't get to read as many cool stories in The New York Times about stuff that got discovered, but what's the big deal? The impact happens like 20 years later. It's like, so what is it now, 2024?

We lost all of, say, 2023 science, so we're working with 2022 science. That means in 2043 or whatever, we're working with the 2022 science instead of the 2023 science. And so when the science finally bears fruit and turns into mRNA vaccines or whatever the next thing is going to be in 20 years, that gets delayed a year. So it's not that in the immediate term you really notice the loss to science; but a lot of technology is built on top of science.

It takes a long time for those discoveries to get spun out into technologies that are out there in the world. It takes a long time to spin them out, and it takes even longer for them to diffuse throughout the broader world. We try to model all this stuff, but that's when you would notice it. You wouldn't notice now, but maybe in a couple of decades you'd be like, oh, I feel a little bit poorer.

No, you wouldn't say that, but got it. That makes sense to me. Okay, let's talk more about your model. How did you estimate how much a year's worth of scientific progress increases life expectancy and individual incomes? Yeah.

So, to start: US GDP per capita, as I said earlier, grows on average like 2% per year, and has for the last century. Some part of that is going to be attributable to science, but not all of it. Some of it is attributable to technological progress, but again, actually not all of it, even though economists say that in the long run, technological progress is kind of the big thing.

In the short run, which can be pretty long, I guess, there's other stuff going on. For example, women entered the workforce over the 20th century in the US, and a lot more people went to college. And those kinds of things can make your society more productive, even if there's not technological progress going on at the same time. So there's a paper by this economist Chad Jones, "The Past and Future of Economic Growth."

And he tries to sort of parcel out what part of that 2% growth per year that the US experiences is productivity growth from technological progress, and what's the rest. And he sort of concludes that roughly half is from technological progress, so say 1% a year. So we're going to start there. But again, with technological progress, part of that comes from science.

Part of that comes from other stuff: just firms, like Apple, doing private research inside themselves. And so how do you figure out the share that you can attribute to science? Well, there are a couple of different sources we looked at. So there was this cool survey in 1994 where they just surveyed corporate R&D managers and asked: how much of your R&D depends on publicly funded research?

They didn't ask specifically about science, but the government funds most of the publicly funded science, so it's a reasonable proxy. And this is 1994, 30 years ago. And their answer was about 20%: 20% of our R&D depends on it.

Another way you can look is at more recent data: you can look at patents. Patents describe inventions. They're not perfect, but they are tied to some kind of technology and invention. And they cite stuff.

They cite academic papers, for example. And about a quarter, actually 26%, of US patents in the year 2018 cited some academic paper. Okay, so this is sort of in the same ballpark. But there are reasons to think that maybe this 20% to 26% is kind of understating things, because there can also be these indirect linkages to science. So if you're writing software, if you're using a computer at all, it runs on chips.

Chips are super complicated to make, and maybe are built on scientific advances. And so that's not necessarily going to be captured if you ask people, "Does your work rely on science?" It relies on computers, but they're not going to tell you it relies on science.

So one way you can quantify that is, again, with patents. Some patents cite academic papers. Patents also cite other patents. And so you could ask: well, how many patents cite patents that cite papers?

Maybe that's a measure of indirect linkages. And if you count any kind of connection at all, then you're up to something like 60% of patents being connected to science in this way. So those are sort of two ways: surveys and patents. And then there's a third way, which is based on statistical approaches, where you're basically looking to see: if there was a jump in how much science happened (maybe it's R&D spending, maybe it's measured by journal publications), what happened to productivity in a relevant sector down the road?

And do you see the jumps matched by jumps in productivity later? Those studies find really big effects, like almost one to one: a 20% increase in how much science happens leads to a 20% increase in productivity, or so on. Wow. And so anyway, I think this indirect science stuff and this other evidence sort of suggests to me that if you just stopped science, like, we're done with science,

we're not doing it anymore, then I think in the long run that would almost stop technological progress. We would probably, and that's just my opinion, get a long way improving on what we have without any new scientific understanding. But we'd hit some kind of wall.

But that's actually not what we're investigating in this report. We're not going to shut down science forever; we're just going to shut it down for one year. That gets into some of these things that I think you alluded to earlier, about how maybe it's not such a big deal.

Maybe we can push on without one year of science. And there are a couple of reasons to think that maybe the evidence I cited earlier is sort of overstating things. Like, you cited a paper, but does that really mean you needed to know what's in it for you to invent the thing? We don't really know.

We know that those patents are more valuable, and there are other clues that they were getting something out of this connection to science, but we don't really know. And then also, you could imagine that innovation is like fishing good ideas, new technologies, out of a pond, and science is a way to restock the pond, because eventually we fish out all the ideas; we sort of do everything. But then science is like, oh, actually there's this whole electromagnetic spectrum, and you can do all sorts of cool stuff with it or whatever.

And maybe if we don't restock the pond, people just overfish, fish down the level that's in there, and there's some consequence, but it's not as severe. So anyway, on balance, you kind of have to make a judgment call, but we end up hewing closer to the direct evidence: the surveys where people say their work relied directly on science, or the patents that directly cite papers. And our assumption is, if you give up science for one year, you lose a quarter of the growth that you get from technological progress.

So instead of growing at 1% a year, you grow at 0.75% for one year. And again, that all happens after a very long delay, because it takes a long time for the science to percolate through. Right. In that first year, they're still mining the science from two decades ago, because of how that knowledge disseminates. Cool.
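
To keep the growth-accounting chain straight, here is a small sketch that mirrors the numbers used in the conversation (roughly 2% growth, half from technological progress, a quarter of that attributed to science); it is an illustration of the stated assumptions rather than the report's full model.

```python
# Sketch of the growth shares described above (illustrative numbers from the conversation).
total_growth = 0.02            # ~2%/yr US GDP-per-capita growth
tech_share = 0.5               # Chad Jones-style estimate: ~half of that is technological progress
science_share_of_tech = 0.25   # report's judgment call: ~a quarter of tech-driven growth needs science

tech_growth = total_growth * tech_share                  # ~1.0% per year
science_growth = tech_growth * science_share_of_tech     # ~0.25 percentage points per year

print(f"Growth from technological progress: {tech_growth:.2%}/yr")
print(f"Pausing science one year: grow {tech_growth - science_growth:.2%} instead of {tech_growth:.2%} that year")
# So incomes end up ~0.25% lower than they otherwise would have been, permanently, and
# only after a roughly 20-year lag while the missing science would have fed into
# technology, plus further time to diffuse internationally.
```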

Luisa Rodriguez
Okay. How do you think about the effects on health? Yeah. So to start there, you're like: how are you going to measure it? What's your preferred way of measuring health gains? Health is this sort of multidimensional thing.

Matt Clancy
And we're going to use life expectancy, with the idea that sort of the key distinction in health is alive or dead. And there was this paper Felicitie C. Bell and Michael L. Miller wrote in 2005 that collected and tabulated all this data in the US on basically life expectancy patterns over the last century. So they've actually got the share of people who live to different ages for every year going back to, I think, 1900.

So if you're born in 1920, we can look at this table and say how many people live to age 5, 10, 20, 30, etcetera. We can look again for people born in 1930, or in 1940. And as you would expect, the share of people surviving to every age tends to go up over time. And they did this report for the Social Security Administration, whose goal is: we want to know how much things are going to cost, how much our benefits are going to cost in the future, how many people are going to survive into old age.

And so this report forecasts into the future, out through 2100. And so they have other tables that are like: if you're born in 2050, what do we think? How many people will survive to age 5, 10, 15, 20? And so I'm going to use that data rather than trying to invent something on my own. And the thing that's kind of nice about this is that, because they've got these estimates for every year, we can have a very concrete idea about what happens if you lose a year of science.

Well, you could imagine (we don't actually do this) that you then lose a year of the gain. So if you would have been born in 2050, but we skip science for a year, instead of being on the 2050 survival curve, you're on the 2049 survival curve. We don't actually do that, because, again, I don't think you lose everything if you lose a year of science: we end up saying you lose, like, seven months. So how do we come up with that?

This is a little bit of a harder one; I don't think there's quite as much data we can draw on. But you can imagine that health comes from scientific medical advances, plus a lot of other stuff that is related to, for example, income: you can afford more doctors, more sanitation, and so on. And there are some studies that try to apportion, across the world, how different factors explain health gains.

And they say, oh, a huge share, 70 or 80% of the health gains comes from technological progress. But if you dig in, it's like, not actually technological progress. It's more that, like, we can explain 20% to 30% of the variation with measurable stuff, like how many doctors there are or something. The rest, we don't know what it is. So we're going to call it technological progress.

So we want to be a little cautious with that. But on the other hand, I think it makes sense that science seems important for health. And you do see that borne out in this data, too: if you look at the patent data, medical patents cite academic literature at basically twice the rate of the average patent. And so where we come down is: all right, we're going to say 50% of the health gains come from science.

The rest comes from income. But a little bit of the income gains also come from science, and that's how you end up with seven months instead of six months. Okay, got it. I mean, I guess the last thing to say is that you can tinker with different assumptions in the model and some of this stuff and see how much it matters.
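
One plausible way to reconstruct the "seven months" figure from the assumptions just described is sketched below. This is a guess at the arithmetic for illustration, not necessarily the report's exact calculation.

```python
# Guess at the "seven months instead of six" arithmetic (illustrative, not the report's exact model).
months_per_year_of_progress = 12       # losing a year of all progress = losing a year's worth of health gains
health_share_from_science = 0.50       # assumption above: half of health gains come directly from science
health_share_from_income = 0.50        # the other half attributed to rising incomes

# From the income side of the model: ~half of growth is technological progress,
# and ~a quarter of that relies on science, so science drives ~12.5% of income growth.
science_share_of_income_growth = 0.5 * 0.25

direct = months_per_year_of_progress * health_share_from_science                                    # 6 months
indirect = months_per_year_of_progress * health_share_from_income * science_share_of_income_growth  # ~0.75 months

print(f"Health progress lost by pausing science one year: ~{direct + indirect:.1f} months")  # ~6.8, i.e. about seven
```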

I think, are these exact numbers right? No. Are they within the right order of magnitude? Probably, yes. Like, I would be very surprised if they were off by more than a factor of two either way.

Luisa Rodriguez
Great, then let's talk about discount rates. So usually when I hear people discount the value of some benefit that some future person, I don't know, living centuries from now, will get, I have some kind of mild moral objection: that's not a good thing to do. Who says that your life is that much more valuable than someone living in a few centuries, just because they're not alive yet? Why are you discounting future lives the way you are? Yeah, and this is a super important issue.

Matt Clancy
And the choice of this parameter is really important for the results. And as you said, it's standard in economics to just weight the distant future so it counts for less than the near future, which counts for less than the present. And there are lots of justifications for why that's the case. But I take your point that morally it sort of doesn't seem to reflect our ethical values.

If you know that you're causing a death 5,000 years in the future, why isn't it as bad as causing a death today? And so the paper ends up in the same place, where it's got a standard economic discount rate. But I think I spend a lot more time trying to justify why we're there, giving an interpretation of that discount rate and trying to more carefully justify it on grounds that are not "we morally think people are worth less in the future." Instead, it's all about epistemic stuff: what can you know about the impact of your policies?

And the basic idea is just that the further out you go, the less certain we are about the impact of any policy change we're going to make. So remember, ultimately we're asking: what's the return on science? And there are a bunch of reasons why the return on science could change in the distant future. It could be that science develops in a way in the future such that the return on science changes dramatically: we reach a period where there are just tons of crazy breakthroughs.

So it's crazy valuable that we can do that faster, or it could be that we enter some worse version of this time of perils. And actually science is just always giving bad guys better weapons. And so it's really bad. But there's a ton of other scenarios too. It could be just that we are ultimately thinking about evaluating some policy that we think is going to accelerate science, improving replications or something.

But over time, science and the broader ecosystem evolve in a way that means the way we're incentivizing replications has become like an albatross around the neck. And so what was a good policy has become a bad policy. Then a third reason is that there could be crazy changes to the state of the world. There could be disasters that happen, like supervolcanoes, meteorite impacts, nuclear war, out-of-control climate change. And if any of that happens, maybe you get to the point where our little science policy, metascience policy stuff doesn't matter anymore.

Like, we've got way bigger fish to fry, and the return is zero because nobody's doing science anymore. It could also be that the world evolves in a way that the authorities that run the world, we actually don't like them: they don't share our values anymore, and now we're unhappy that they have better science. It could also be that transformative AI happens. So, long story short: the longer time goes on, the more likely it is that the world has changed in a way that means you can't predict the impact of your policy anymore.

And so the paper simplifies all these things; it doesn't care about all these specific scenarios. Instead, it just invents this term, an "epistemic regime." And the idea is that if you're inside a regime, the future looks like the past, and so the past is a good guide to the future.

And that's useful, because we're saying things like: 2% growth has historically occurred, and we think it's going to keep occurring in the future; health gains have looked this way, and we think they're going to look this way in the future. As long as you're inside this regime, we're going to say that's a valid choice. And then every period, every year, there's some small probability the world changes into a new epistemic regime, where all bets are off and the previous stuff is no longer a good guide.

And how it could change could be any of those kinds of scenarios that we came up with. Then the choice of discount rate becomes like, what's the probability that you think the world is going to change so much that historical trends are no longer a useful guide? And I settle on 2% per year, like a one in 50 chance. And where does that come from?

Open Phil had this AI Worldviews Contest, where there was sort of a panel of people judging what the probability of transformative AI happening would be. And that gave you a spread of people's views about the probability we get transformative AI by certain years, and if you look in the middle of that, you get something a little less than 2% per year. Then Toby Ord has this famous book, The Precipice, and in there he has some forecasts about existential risk that is not derived from AI, but that covers some of those disasters. I also looked at trend breaks in the history of economic growth; there's sort of been one since the Industrial Revolution.

And maybe we expect something like that again. Anyway, we settle on sort of a 2% rate. And the bottom line is that we're sort of saying people in the distant future don't count for much of anything in this model. But it's not because we don't care about them; it's just that we have no idea whether what we do will help or hurt their situation. Another way to think of 2% is that, on average, every 50 years or so, the world changes so much that you can't use historical trends to extrapolate.
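
Here is a small sketch of that reading of the discount rate, treating 2% as a per-year chance that the current epistemic regime ends. It is a simplification of the reasoning above, not the paper's full machinery.

```python
# The 2% discount rate read as a per-year probability of leaving the current epistemic regime.
p_change = 0.02   # chance per year that the world changes enough that past trends stop being a guide

def prob_trends_still_apply(years):
    """Probability the current regime has survived (so historical extrapolation is still valid)."""
    return (1 - p_change) ** years

expected_regime_length = 1 / p_change   # geometric distribution: ~50 years on average
print(f"Expected time until the regime changes: ~{expected_regime_length:.0f} years")
print(f"Chance historical trends still apply after 100 years: {prob_trends_still_apply(100):.0%}")  # ~13%
```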

And one last caveat, which we'll come to much later, I think, is that when you think of discounting in this way, extinction is sort of a special class of problem that we will come back to. Okay, so I actually found that really, really convincing. I know not everyone shares the view that people living now and people living in the future have the same moral weight, but that is my view. But I'm totally sympathetic to the idea that the world has changed drastically in the past and could change drastically again. And there are all sorts of ways that centuries from now could look so different from the present that we totally shouldn't be counting the benefits that, I don't know, people living a thousand years from now could get from our science policy.

Luisa Rodriguez
There are all sorts of things that could mean our science policy has absolutely no effect on them, or has wildly more effect on them, for the reasons you've given. So I didn't expect to be so totally persuaded of having a discount rate like that, but I found that really compelling. I guess before we move on, I'm still a little skeptical of 2% in particular. Something really weird happening once every 50 years was really helpful for making that very concrete to me. But it also triggered a reaction like:

Wait, I don't feel like something really, really weird happens every 50 years. I feel like we've moved from one epistemic regime to another a couple of times, based on how you're describing it and how I'm imagining it, where one example would be going from very slow economic growth to very exponential economic growth. Maybe another is when we came up with nuclear weapons. Am I kind of underestimating how often these things happen, or is there something else that explains why I'm finding that counterintuitive? No, I think that if you look historically, 2%, something changing every 50 years...

Matt Clancy
That seems too often. I think that's what you're saying. Right, that is what I'm saying, yeah. It's rare that things like that happen, and I would share that view.

I think that the reason we pick a higher rate is because of this view that the future coming down the road is more likely to be weird than the recent past. And so that's kind of embedded in there. Like, that's implicit in this. Like, people's views on, is there going to be transformative AI? Like, transformative AI by definition is transformative.

I also think that, remember, this is a simplification. I think more realistically what's going to happen is that there'll just be a continuous, gradual increase in fuzz and noise around your projections. You're not going to literally move from one to another. Right. It's just very convenient to model it as if we move from one regime to another.

But I think in reality it's more like that, and it's sort of captured by this. And then lastly, I'd say, all else equal, a conservative estimate, where you're a little more humble about your ability to project things into the future, aligns with my disposition. But I guess other people would say, well, then why not pick 5%? So you still have to pick a number at some point. Yeah, yeah, fair enough.

Luis Rodriguez
Okay, so that's kind of the framework behind the model. Let's talk about how big the returns to science are if we ignore those costs we talked about earlier, under what basically feel like realistic assumptions to you. So in other words, for every dollar we could spend on science, how much value would we get in terms of, yeah, improved health and increased income?

Matt Clancy
Yeah. So this is setting aside that whole time of biological perils from dangerous technology. So if somebody doesn't think that's actually a realistic concern, they'll still find this part useful, I think. Our kind of current preferred answer is that every dollar you spend on science, on average, has $70 in benefits.

And that's subject to two important caveats. So, one, this is ongoing work. And since the original version of the report went up, we improved the model to incorporate lengthy lags in international diffusion. So when something gets discovered in America, that's not necessarily when it's available in Armenia or whatever.

And so the version of the paper with that improvement is the version of the report that's currently available on archive. Anyway, $70 per dollar. The other interesting caveat is that, you know, a dollar is not always a dollar. A dollar means different things to different people. What we're talking about, what we care about, is people's well being and welfare.

And a dollar buys different amounts of welfare. If you give a dollar to Bill Gates, you don't buy very much welfare. If you give a dollar to somebody who's living on a dollar a day, you buy a lot more welfare with that dollar. At Open Philanthropy, we just have a benchmark standard: we measure the social impact of stuff in terms of the welfare you'd buy if you gave a dollar to somebody who's earning $50,000 a year, which was roughly what a typical American earned in 2011, when we set this benchmark.

Luis Rodriguez
Cool. Okay, so spending $1 giving you a $70 return seems like a really, really good return to me. Yeah, it's great. Yeah, it is good. The key insight is that, you know, compare it to, you know, when you talk about $70 of value, you can imagine somebody just giving you $70.

Matt Clancy
Right? And the benefits we get from science are very different than that kind of benefit. So if I gave the typical american $70, that one, that's like a one time gift. And economists have this term rival, which means, like, kind of one person can have it at a time. So if I have the $70 and I give it to you now, you have the $70, and I no longer have $70.

Okay? So it's sort of this transient, one-off thing. The knowledge that is discovered by science is different. It's not rival. If I discover an idea, I can give you the idea, and now we both have the idea.

And lots of inventors in practice can build on the idea and invent technologies. And technologies, you can think of them as blueprints for things that do things people want, and those are also nonrival. So when science gives you $70 in benefits, it's lots of little benefits shared by, in principle, everyone in the world and over many, many generations.

So the individual impact per person is much smaller, but it's just multiplied by lots of people. And sort of the baseline is, you know, we spent hundreds of billions of dollars per year on science. And what's that getting us? It's getting us a quarter of a percent of income growth in this model, plus this marginal increase in human longevity. But to a first approximation, after lots of decades, everybody gets those benefits.

And that's what's different about just a cash transfer. And, yeah, another way to think about whether this is a lot, and this is sort of my plug for metascience: a 70x return is great, but that actually wouldn't clear Open Phil's own threshold for the impact we want our grants to make. Interesting. What that means is we would not find it a good return to just write a check to science, right?

We pick specific scientific projects because we think they are more valuable than the average, and so on. But imagine, instead of just writing a check to science, you have a billion dollars per year: you could either write it to science, or you could spend it improving science, trying to build better scientific institutions, making the scientific machine more effective. And just making up numbers, say you could make science 10% more effective if you spent a billion dollars per year on that project. Well, we spend like $360 billion a year on science. If we could make that money go 10% further and get 10% more discoveries, it'd be like we had an extra $36 billion of science spending.

And if we think each dollar of science generates $70 in social value, this is like an extra $2.5 trillion in value per year from this $1 billion investment. That's a crazy high ROI, like 2,000 times as good. And that's, yeah, that's the calculation that underlies why we have this innovation policy program and why we think it's worth thinking about this stuff, even though there could be these downside risks and so on, instead of just doing something else. Right.
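As a sanity check on the arithmetic in that metascience example: the $360 billion, 10%, $1 billion, and $70 figures are the ones quoted in the conversation, and everything else below is just multiplication.

```python
# Rough metascience ROI arithmetic, using the figures quoted in the conversation.
annual_science_spend = 360e9   # ~$360 billion per year spent on science
efficiency_gain = 0.10         # hypothetical: metascience makes that spend 10% more effective
value_per_dollar = 70          # estimated social value per dollar of science
metascience_cost = 1e9         # hypothetical $1 billion per year metascience program

extra_effective_spend = annual_science_spend * efficiency_gain   # $36 billion
extra_social_value = extra_effective_spend * value_per_dollar    # ~$2.5 trillion

roi = extra_social_value / metascience_cost
print(f"${extra_social_value/1e12:.2f} trillion per year, ~{roi:.0f}x return")
# -> $2.52 trillion per year, ~2520x return (rounded in the episode to "like 2,000 times")
```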

Luis Rodriguez
Okay, cool. That was really concrete. So that seems just like a really insane return on investment, as you've said. Does that feel right to you? Yeah.

Matt Clancy
So it's tough to say, because as I said, the nature of the benefit is very alien to your lived experience. It's not like you could know what it's like to be lots of people getting a lot of little benefits. But I did do this one exercise to try to sense-check if this is right. And I was like, all right, let's think about my own life. You know, spoiler:

I'm 40, so I've had 40 years of technological progress in my lifetime. And if you use the framework we use to evaluate things in this model, it says that if progress is 1% per year, my life should be something like 20% to 30% better than if I were my age now 40 years ago. And so I thought, all right, does it seem plausible that technological progress for me has generated a 20% to 30% improvement? And so I spent, like, a...

I don't know, a while thinking about this. And I think, yeah, it is actually very plausible, and it was a sort of interesting exercise, because it also helps you realize why maybe it's hard to see that value. One is that it affects the amount of time I have to do different kinds of things. And when you're remembering back, you don't remember that you spent 3 hours vacuuming versus 1 hour vacuuming or something.

You just remember you were vacuuming. And so it kind of compresses the time, and so you lose that. And then also, there are just so many little things that happened that it's hard to value. It's easy to evaluate one big-impact thing, because you can ask whether you had it or you didn't, but when it's a thousand little things, it's harder to value. But do you want to hear a list? Yeah, absolutely.

A lot of little things. All right, I'll start with little trivial things. I like to work in a coffee shop, and because of the internet and all the technology that came with computing, I can work remotely in a coffee shop most days for part of the day. I like digital photography. These are just trivial things.

And, you know, not only do I like taking photos, but I've got them on my phone. The algorithm is always showing me a new photo every hour. My kids and I look through pictures of our lives way more often than when I was a kid looking at photo albums. A little bit less trivial is art, right? My access to some kind of art is way higher than if I'd lived 40 years ago.

Spotify Wrapped came out, I don't know, in November, and I was like, oh, man. I spent apparently 15% of my waking hours listening to music by, it said, like a thousand different artists. Similarly with movies, I'm watching lots of movies that would have been hard to access in the past. Another dimension of life is learning about the world. I think learning is a great thing.

And, like, one, we just know a lot more about the world sort of through the mechanisms we just through science and technology and stuff. But there's also been, like, this huge proliferation of sort of, like, ways to ingest and, like, learn that information in a way that's useful to you. So there's podcasts where you can have people come on and explain things, but there's data, there's, like, explainers, there's, like, data visualization is way better. YouTube videos, large language models are like a new thing and so forth. And, like, there's living literature reviews, which is, like, what I write.

So a third of what I spend my time doing didn't exist 40 years ago. Another dimension that makes life worth living and valuable is your social connections and so on. And for me, remote work has made a big difference for that. I grew up in Iowa, I have lots of friends and family in Iowa, and Iowa is not necessarily the hotspot of economics and innovation stuff.

But I live here. I work remotely for Open Phil, which is based in San Francisco. And then remote work also has these time effects. So I used to commute for my work 45 minutes each way. I was a teacher, a professor, so that was not all the time.

I had the summers off and so on. But anyways, still, saving 90 minutes a day is a form of life extension that we don't normally think of as life extension, but it's extending my time. Then there are tons of other things that have the same effect where they just free up time. When I was a kid, I used to drive to stores and walk shop floors a lot with my parents to get the stuff you need. Now we have online shopping, and a lot of the mundane stuff just comes to our house shipped.

It's automated and stuff. We've got a more fuel-efficient car, so we're not going to the gas station as much. We've got microwave-steamable vegetables that I use instead of cooking in a pot.

We've got an electric snowblower that doesn't need seasonal maintenance. Just a billion tiny little things. Every time I tap to pay, I'm saving maybe a second or something. Once you add that up with the remote work and the shopping, I think this is giving me back weeks per year that would otherwise go to stuff I don't want to be doing. And I can keep going.

And it's not just that you don't have to do stuff you don't want to do. There are other times when it helps you make better use of time that might otherwise not be available to you. All these odd little moments when you're waiting for the bus, or for the driver to get here, or for the kettle to boil, or at the doctor's office, whatever, you could be on your phone. And that's on you, how you use that time, but you have the option to learn more and do interesting stuff.

Audio content is the same: for like a decade, half the books I've quote-unquote read per year have been audiobooks, and, you know, podcasts. And I'm sure there are people listening to this podcast right now while they're doing something during which they otherwise would not be able to learn anything about the world. They're driving or walking or doing the dishes or folding laundry or something like that. So that's all the tons of tiny little things. And this is just setting aside medicine, which is, like, equally valuable to all that stuff, right?

I've been lucky in that I haven't had life-threatening illnesses, but I know people who would be dead if not for advances in the last 40 years, and they're still in my life because of this stuff. And then I benefited, like everyone else, from the mRNA vaccines that sort of ended lockdown and so forth. So, long story short, it seems very plausible to me that the framework we're using, which says this should be worth 20% to 30% of my wellbeing over a 40-year lifespan, is right.

I'm luckier than some people in some respects, but I've also benefited less than other people in some respects. If somebody had a medical emergency and wouldn't be alive here today otherwise, they could say they benefited more from science than me. And so if this is happening to lots of people now and in the future, that's where $70 per dollar in value starts to seem plausible. I was already ready with follow-up questions like, I'm not so sure that you're 20% to 30% better off than you would have been in 1984, but I think I'm just sold, so we can move on. Well, I guess thanks for indulging me, because I think it's hard to get the sense unless you really start.

Luis Rodriguez
To go through a long list. And it's just so many little things. And if you think of any one individually, it's like, oh, this guy's happy he can go to coffee shops, boo hoo. But that's just a tiny sliver of all the different things that are happening.

Yeah, yeah, I think I still have some uncertainty. It's closer to a philosophical uncertainty than an empirical one, which is just that humans seem to have ranges of wellbeing they can experience. And it might just be that a person who grew up kind of like you or I did in the US, without anything terrible happening to them, can't be that much better off now than they would have been 40 years ago, just because of the way our brains are wired. But it also seems like there are some things that clearly do affect wellbeing, and a bunch of these, especially the health things, clearly do.

And I definitely put some weight on all of these kind of small things that add up also making a difference. It does fill me with horror that back then I couldn't have listened to the music and podcasts I would have liked while doing hours of chores at home, chores that now, one, I get to do while doing fun stuff, and two, I get to do with a Dyson. My Dyson is awesome. Yeah, we have an iRobot that we use for some of our things. But no, actually, this concern is part of what motivated me: does innovation policy really matter for people who are already living on the frontier?

Matt Clancy
And I came away thinking it may be hard to see, but I think it does. Okay, so let's leave the benefits aside for now and come back to the costs. So the costs we're thinking of as the increase in global catastrophic biological risks caused by scientific progress. And to estimate this, you used forecasts generated as part of the Forecasting Research Institute's Existential Risk Persuasion Tournament. Can you just start by actually explaining the setup of that tournament?

Yeah. This tournament was hugely important, because the credibility of this whole exercise hinges on how you're going to parameterize, or estimate, these risks from technologies that haven't yet been invented. Anyway, the tournament was held in 2022, with I think 169 participants. Half of them were superforecaster generalists, meaning they've got experience with forecasting and they've done well in various contests or other settings. The other half were domain experts. So they're not necessarily people with forecasting expertise, but they're experts in the areas being investigated.

They got people from kind of four main clusters, which is biosecurity, AI risk, nuclear stuff... there might have been climate, but I can't remember. I focused mostly on the biorisk people. There were 14 biorisk experts. Who they are, we don't really know.

It's like anonymous. Anyway, the format of this thing, it's all online, and it proceeded over a long time period. People forecast, I think it was 59 different questions related to existential risk. First they did them by themselves as individuals. Then they were put into groups with other people, kind of of the same type.

So superforecasters were grouped together, biorisk experts were grouped together, and then they collaborate, they talk to each other, and then they can update their forecasts again as individuals. Then the groups are combined, so you get superforecasters in dialogue with the domain experts. They update again.

And then the last stage, they get access to arguments from other groups that they weren't part of. And it's well incentivized to try and do a good job. You get paid if you are better at forecasting things that occur in the near term, 2024, 2030. But they want to go out to 2100. And so it's a real challenge.

How do you incentivize people to come up with good forecasts that we won't actually be able to validate? They try a few different things, but one is, for example, you get awards for writing arguments that other people find very persuasive. And then they tried this thing where you get incentivized to be able to predict what other groups will predict. There's this idea, I think Bryan Caplan maybe coined it, of the intellectual Turing test: can I present somebody else's opinion in such a way that they would say, yep, that's my opinion?

And so this is sort of getting at that. Like, do you understand their arguments? Can you predict what they're going to say is the outcome? And so, you know, fortunately for me, in this project, they asked a variety of questions about genetically engineered pandemics and a bunch of other stuff. The main question was like, what's the probability that a genetically engineered pandemic is going to result in 1% of the people dying over a five year period?

And they debated that a lot. At the end of the tournament, they asked two additional questions: what about 10% of the people dying, or what about the entire human race going extinct? Those last ones, it's not ideal.

They weren't based on the same level of debate, but these people had all debated similar issues in the context of this one. Right. Okay, so that's the setup. Yeah. What were the bottom lines relevant to the report?

Luis Rodriguez
What did they predict? Did they think there'll be a time of biological perils? So one thing that was super interesting was that there was a sharp disagreement between the domain experts and the superforecasters, and this sharp disagreement never resolved, even though they were in dialogue with each other and incentivized in these ways to try to get it right. They just ended up differing in their opinions in a way where the gap was never closed, and it differs quite substantially. So if you start with the domain experts, these are the 14 biosecurity experts.

Matt Clancy
They thought that a genetically engineered pandemic disaster is going to be a lot more likely after 2030. The superforecasters thought the same thing, that it's going to be more likely, but their increase was smaller. And because there's this sharp disagreement between them, throughout the whole report I basically go, all right, here's the model with one set of answers, here's the model with the other set of answers. And the key thing that we have to get from their answers is: what is the increase in the probability that somebody dies due to these new biological capabilities?

And like I said, there's a sharp disagreement. The superforecasters gave answers that, if I put them in the terms of the model, imply like a 0.0021% probability per year. The domain experts, 0.0385%. The numbers do matter, but they're not going to be meaningful to somebody who's never thought about this. The key thing is that the domain experts think the risk is about 18 times higher than the superforecasters do. Is there a way to put that in context?

Luis Rodriguez
Like, I don't know, what's my risk in the next century or something of dying because of these risks, according to each of those groups? So we all just lived through this pandemic event, so we all actually have some experience of what these things can be like. And The Economist magazine did this estimate that in 2021, the number of excess deaths due to Covid-19 was 0.1% of the world. And so I think that's a useful anchor that we're familiar with.

Matt Clancy
So if you want to take the superforecasters, they're going to say something like that happens every 48 years. And I should say it's not that a Covid-like event happens every 48 years in general; it's very specifically that there is an additional genetically engineered Covid-type event every 48 years, as implied by their answers. For the domain experts, it's like this is happening every two and a half years; that's their forecast. And I think another important thing to clarify is that these answers are also consistent with them thinking, for example, that it could be something much worse less often, or something not as bad more often, that kind of thing.

But that's a way to calibrate the order of magnitude that they're talking about. Okay. Yeah. Okay. So it could be something like, at least according to the domain experts, you have something half as bad as Covid once a year, or it could be twice as bad every four years.
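The "every 48 years" and "every two and a half years" figures follow directly from the two groups' implied annual probabilities and The Economist's 0.1%-of-the-world excess-death estimate for 2021; a quick sketch of that conversion:

```python
# Translating the implied annual death probabilities into "Covid-2021-sized events".
covid_2021_excess_deaths = 0.001      # ~0.1% of the world's population in 2021

superforecasters = 0.0021 / 100       # 0.0021% per year, as implied by their answers
domain_experts   = 0.0385 / 100       # 0.0385% per year

print(round(domain_experts / superforecasters))            # ~18x disagreement
print(round(covid_2021_excess_deaths / superforecasters))  # ~48 -> one such event every ~48 years
print(round(covid_2021_excess_deaths / domain_experts, 1)) # ~2.6 -> roughly every 2.5 years
```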

Luis Rodriguez
But that's the kind of probabilities we're talking about. And they do seem even more different to me once you put them in context like that. So yeah, maybe we should actually just address that head on. Yeah. How do you think about the fact that the superforecasters and domain experts came up with such different results, even though they had months to argue with each other?

And like in theory, it would be really nice if they could converge. So I think this is like a super interesting open question that I'm fascinated by. I didn't participate in this tournament, I'm not one of the sort of anonymous people, but I did engage in a very long back and forth debate format with somebody about explosive economic growth. And we're on opposite sides of the position. And again, I think there was no convergence in our views.

Matt Clancy
Basically, that's pretty crazy. It's interesting because the virus people in this are not outliers. There was this disagreement between the superforecasters and domain experts just across the board on different x risk sort of scenarios. And I could say more, but I think like the core thing, my guess is that people just have different intuitions about how much you can learn from theory. And so I think some people, maybe myself included, are like, until I see data, my opinion can't be changed that far.

You can make an argument, and for most of them I'm like, well, I can't necessarily pick a hole in it, but you might just be missing something. Theoretical speculation has a bad track record, maybe. But I think it's actually an open question; we don't actually know who's right. If I'm right that people have different intuitions about how much you can learn from theory...

We don't have, as far as I know, good evidence on how much weight you should put on theory. So hopefully we'll find out over the next century or something. So, you know, I have views on who's right. But through the report, and I'm also not a biologist, and I didn't participate, I just, like I said, present them both in parallel and try to hold my views until the end.

Luis Rodriguez
Okay. So as we talk about the results, we will talk about the results you get if you use the estimates from both of these groups and we'll be really clear about what we're talking about. But then, yeah, I do want to come back to who you think is right because, yeah, I mean, people are going to hear both results, but like, the results in some cases are pretty different and have different impacts, at least in kind of the degree to which you'd believe in like one set of policy recommendations over another. And, yeah, I want to give people a little bit more of something to kind of have when thinking about which set of results they buy. But we'll come back to that in a bit.

Let's talk about your headline results. So you estimated the net value of one year of science, factoring in the fact that science makes people healthier and richer, but also might bring forward the time of biological perils. What was that headline result? Yeah, so like you said, it depends on whose forecast you use. Earlier I said that if you just ignore all these issues, the return on science is like $70 per dollar.

Matt Clancy
It was actually $69 per dollar. And that only matters because if you use the superforecaster estimates, the return drops by a dollar, to $68 per dollar. If you use the domain experts, it drops from 69 to roughly $50 per dollar. Getting $50 for every dollar spent is still really good. But another way to look at it is that these bio capabilities, if the domain experts are right, erase roughly a quarter of the value that we're getting from science, because dropping from around 70 to 50 is close to a quarter of the value, basically. Yeah.
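Just restating the arithmetic behind "erase a quarter of the value," using the numbers as quoted above:

```python
# Headline returns per dollar of science spending under the different forecasts.
baseline = 69              # $ of social value per $ spent, ignoring the time of perils
with_superforecasters = 68
with_domain_experts = 50

fraction_erased = (baseline - with_domain_experts) / baseline
print(f"{fraction_erased:.0%}")  # ~28%, i.e. roughly a quarter of the value
```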

In this whole exercise, we're not physicists charting with precision whether we're going to see Higgs boson particles or something. This is more like a quantitative thought experiment. If you quantify the benefits based on historical trends and you quantify the potential costs based on these forecasting exercises, you can run a horse race. My takeaway is basically that the benefits of science historically have been very good, even relative to these forecasts.

So you should keep doing it. But also, if these domain experts are right, the risks are very important and quantitatively matter a lot and addressing them is super important too. Yeah. Okay, so let's see. I feel like there are a couple of takeaways there.

Luis Rodriguez
So one is just that, in particular, if the domain experts are right, the risks are really eating some of this value. And that's a real shame, even if it still points at science being net positive. I guess also just to affirm: even though you've done this super, super thoroughly, it's just such a massive, difficult question that your results are going to have really wide error bars still. But I love this kind of exercise in general, because I feel like people could make really compelling arguments to me about science being negative because of engineered pathogens, and people having more and more access to, I don't know, the ability to create a new pathogen we've never seen before that could have really horrible impacts on society.

And I could be like, wow. Yeah, wow. What if science is net negative? And then I could also be convinced, I guess, through argument that the gains from science are really, really important, and yes, they increase these risks, but we obviously have to keep doing science or society will really suffer. And it just really, really matters what numbers, even if they're wide-ranging, you put on those different possibilities. And you just don't get that through argument alone. Um, so, yeah, 100%.

Matt Clancy
And when we started this project, the state of play was just, yeah, there are arguments that seem very compelling on either side. And in that situation, it's very tempting to just say it's like 50-50, you know. Totally. But the quantities really matter here. And so when we started this research project, I was like, well, we have to see if, best case scenario, the benefits way outweigh the costs or vice versa. Because, as you said, there are going to be these error bars.

But I was like, that's not going to happen. I'm sure we're going to end up with this muddled answer where, if you buy the arguments that say it's risky, you get that it's bad, and if you buy the arguments that say it's not risky, you get the opposite. So, anyway, I'm very glad we checked, and I'm super glad that there was this Existential Risk Persuasion Tournament that I think gave us more credible numbers for what reasonable people might forecast if they're in a group and they've got incentives to try to figure it out and so on.

It's not just me kicking numbers around. Yeah, yeah, yeah. Yep, I completely agree. And yeah, I see what you mean. It could have been the case that you play around with a few assumptions that people really disagree on, and that changes the sign of your results.

Luis Rodriguez
And so you still don't get any clarity. But in fact, this just did give you some clarity. And even using both of these estimates, you get positive returns from science. Okay, so there are these superforecaster estimates that you use, and then you also use domain experts. Um, but then you do this third thing that doesn't rely on either of those.

Can you explain that? Sure. So, you know, suppose you think that both these guys are really wrong. Like, you think it's maybe worse or better or something. Another kind of useful thing this model can do is say, what would the forecast need to be for the average return on science to flip from positive to negative?

Matt Clancy
So we saw that if it's 0.0021%, it barely makes a difference. That's the superforecasters. If it goes up to 0.0385%, which is 18 times as high, it chops off like a quarter of the value. How much higher does it need to go before science has actually flipped from being a good thing to a bad thing?

Luis Rodriguez
Right. Okay, so how likely do global catastrophic biological risks from these advances in science have to be before you're like, okay, let's pause science? Right. And there are lots of different assumptions you can make in the model, but the preferred one I use, which I think is the conservative one, gives a break-even forecast of around a 0.13% per year probability of dying from one of these new capabilities. That sounds really high.

Matt Clancy
Well, yeah, it is actually really high. It's about three times higher than the domain experts' forecast, and they were the more pessimistic group. And if you compare that to Covid, remember, that's actually worse than 2021 Covid. The Economist estimated excess deaths due to Covid were like 0.1% of the world in 2021. So this would be like every year.

The worst parts of COVID just sort of all the time, or, you know, something ten times as bad once every decade or something like that. So that's not what anyone is forecasting in this tournament, or at least not what the sort of group medians are. But that's what it would need to be for the model to sort of say, now you want to avoid that. The benefits of science aren't worth all of this. Okay.
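Putting the break-even number alongside the other figures quoted in the conversation, as a small sketch:

```python
# Comparing the break-even forecast to the tournament forecasts and to Covid-2021.
breakeven = 0.13         # % per year chance of dying from the new bio capabilities
domain_experts = 0.0385  # % per year, the more pessimistic tournament group
covid_2021 = 0.1         # % of world population lost to excess deaths in 2021

print(round(breakeven / domain_experts, 1))  # ~3.4x the pessimistic forecast
print(breakeven > covid_2021)                # True: worse than 2021 Covid mortality, every year
```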

Luis Rodriguez
But importantly, those results are only considering catastrophes that don't permanently affect the future of humanity. So in particular, it doesn't consider the possibility that biological catastrophe could cause human extinction. What happens when you account for the risk of human extinction? Yeah. So everything we've done so far has been kind of in this normal economic framework where you're discounting the future.

Matt Clancy
So stuff that happens in the distant future doesn't matter. And that's because we can't predict the impact of science or policies that far out. But extinction is kind of a different class of problem, as sort of we alluded to earlier. It doesn't have the same uncertainty about long run forecasts. Like if we actually go extinct, then we know we can predict a billion years from now, we'll still be extinct.

We can't predict whether our little science policy will look very good or very bad a billion years from now. And so in that sense, if you think that faster science can affect the probability of something like extinction, or even just a kind of collapse of civilization where we no longer have this advanced technological civilization that's making scientific progress, something like a sci-fi story like The Stand or The Last Man on Earth, then in any of those cases the 2% per year discount doesn't seem to make sense. So the way I handle that in the report is basically to build a second model that now has two branches. One branch is the same as everything we've talked about: benefits of science, discounting because we don't know what's going to happen in the long run, stuff like that.

But then there's this other branch where, with a very small probability, very remote but not zero, the human population goes extinct, basically. And if we're extinct, well, now we know for sure what happens as far out as you want to go. But then the question becomes: what is it worth not to be dead? So we don't know what the value of the policy will be, but we might expect that there's some value of not being dead that we can forecast out longer.

And you can then ask, in the model, is the very remote chance of losing all that worth the benefits that we get from science, which come from this other branch in the tree and all the modeling we've done? I don't try to estimate what the long-run future value of the species is. Instead, I sort of say the model can spit out some guidance for what you might think it is and how you might think about these questions. The first problem we have, though, is that whereas the superforecasters and the domain experts disagree a lot about the probability of genetically engineered pandemics causing all these problems, they disagree even more about the probability of extinction. If you recall, there was a 20x, I think it was 18, but close to 20x, difference in the annual probabilities of dying from one of these diseases that I brought up before.

But the difference on the probability that the species goes extinct from a genetically engineered pandemic is like 140 times between the two. So it's pretty huge. Yeah. Yeah. So anyway, you can basically say, all right, if there's this very small chance that we go extinct, but we get these benefits if we don't, how bad does extinction need to be for the benefits not to be worth it?

And if you think the biosecurity people are right, the more pessimistic group, well, you only really need to believe that the long-run future is worth more than about 100 years of today's global wellbeing. So 8 billion people at this standard of living persisting for another 100 years. If you think the superforecasters are right, you need a much bigger number: it's in the range of 20,000 years or so. And, you know, this is interesting for a couple of reasons.

One is that you guys have covered longtermism before on the podcast and so forth, the idea that we should think a lot about our long-run impact on the species. And one critique you sometimes hear of longtermism is that you don't really need that baggage of the distant future population to affect your decision making. Like, if you think transformative AI is going to be a big deal, a lot of people think it's going to happen in our lifetime.

And so all you have to care about is yourself and your kids, and you make similar decisions. But science might be a situation where this framework does actually give you different decisions. If you're not putting much weight on that very long-run future, you're going to say the benefits of science clearly outweigh the costs. And if you do put a lot of weight on it, then it's plausible that you'll reach the opposite conclusion.

And I'm like, I'm personally not 100% sure what to make of all this. Also, it sort of seems weird to make big decisions based on these parameters that we have kind of the least information about. It's really hard to know what these probabilities really should be. But I do think one bottom line I take away is that these issues are worth taking seriously, and they actually do affect some decisions we made in the innovation policy program. Yeah.

Luis Rodriguez
Okay. Just because those are kind of complicated but important conclusions, I want to make sure I understand them. So if you buy into the estimates from the domain experts, then extinction from these risks that science pushes along or brings forward is plausible enough that all you have to do is think there's plausibly another hundred years of human existence, roughly as good as current human existence, to conclude it's just not worth risking that for the benefits of science. Yeah, I mean, to be really precise, what the model is saying is that if you had the choice to pause science for one year, which of course you don't, but that's the thought experiment the model considers, you would give up some benefits, but you would also reduce your risk by a small amount, like one year's worth of being in this sort of extinction risk zone. And if you think the chances of extinction are sufficiently high, which the domain experts do, then basically, yeah.

Matt Clancy
They would be happy to make that trade, because they'd be willing to give up the benefits; the benefits are not as high as the expected value of what you might lose. It's a very remote chance that you lose it, but if you do, it would be a big deal. And they don't think it's that remote, I guess you would say, compared to other people.
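One way to formalize the one-year-pause trade just described, as a hedged sketch: the functional form and the placeholder numbers below are illustrative, chosen only to reproduce the roughly 100-year threshold and the roughly 140x disagreement discussed in this section, not the report's actual parameters. The pause wins whenever the value you place on the long-run future exceeds the forgone annual benefit divided by the annual increase in extinction risk, so a 140x disagreement about that risk shifts the break-even future value by about two orders of magnitude.

```python
# Hedged sketch of the pause-for-a-year trade-off. Units: "years of today's
# global wellbeing". The risk numbers are placeholders chosen to reproduce the
# ratios discussed in the episode, NOT the tournament's actual forecasts.
def breakeven_future_value(annual_benefit, extinction_risk_increase):
    """Future value above which forgoing one year of science's benefits is worth
    the one-year reduction in added extinction risk."""
    return annual_benefit / extinction_risk_increase

benefit_per_year = 1.0                    # benefits of one year of science (placeholder)
risk_pessimistic = 0.01                   # placeholder annual extinction-risk increase
risk_optimistic = risk_pessimistic / 140  # the two groups differ by roughly 140x

print(breakeven_future_value(benefit_per_year, risk_pessimistic))  # 100.0
print(breakeven_future_value(benefit_per_year, risk_optimistic))   # 14000.0
# The thresholds scale 1:140, the same order-of-magnitude gap as the
# ~100 years vs ~20,000 years figures discussed above.
```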

So that's how you think about it in the context of this model. Right. Okay. Which I think makes it actually an even more striking conclusion, because I was kind of glossing over it and saying you might not think accelerating science in general is worth the risk, but this is actually saying you might not think the benefits of science over just one year are worth the risk of bringing forward this time of biological perils, which, yeah, seems even more extreme and worrying to me. Yeah, it is.

I don't know. In some sense it's a surprising result, but there's also a sense in which maybe it's not. The surprising part is that it's only 100 years. But the thing that maybe is not surprising is that if you think the human race is going to go on a long time, then that's a big number that can swamp lots of other considerations. Right.

Luis Rodriguez
And so the superforecasters don't think it's that likely. So then you have to think that the future might have 20,000 years' worth of value to be willing to pause science, which seems like a way more difficult position to, I don't know, stand behind. I really don't know how to think about whether I think we've got 20,000 years of good stuff ahead of us. That's right. And there are other complicating things, because you also have to think about things like: does what the species turns out to look like in 20,000 years align with our values or not?

Matt Clancy
I don't know. It's weird stuff. It's weird stuff. Okay, so does that mean that if you put some weight, maybe a lot of weight, on these domain experts' views, you should just be really in favor of pausing science now for a year? I think that the model sort of says, if you just take that part of the model, then maybe.

Yeah, but as we'll see, there are other stories you can tell that we also try to look at in the paper. So two countervailing stories that we look at are basically ideas that better science could reduce extinction risk. Because if you think extinction is the whole ballgame and improving science can reduce that, then suddenly things flip. And there are kind of two ways that could happen. One is that better, faster, nimbler science, science that is really well targeted at important problems, is going to accelerate science but maybe also reduce the dangers in this time of perils, because we'll be able to quickly identify and develop countermeasures, and maybe we'll be more prudent in what we fund and reduce risks that way.

And so that's one thing I look at. The other is that maybe there's some endgame where you can reach a stage of technological maturity where you invent technologies that render this whole time of perils a much less salient problem. Maybe you invent a miracle drug that can kind of cure anything. Or maybe, more realistically, you develop far-UVC lighting that can sanitize the air, and you embed that all over your built environment or something like that. And if we can get there, maybe we don't have to worry so much about all that.

And in that case, if you accelerate science, maybe you can get there quicker. So those are the two scenarios that we investigate. Yeah, let's talk about those. Actually, let's first talk about whether we could kind of do better science.

Luis Rodriguez
So I guess you've already kind of said a version of this. But it seems totally plausible to me that under your remit could be something like helping institutions that fund science, or that help decide what science gets done, figure out how to distinguish between technology that's very likely to be purely good versus technology that's reasonably likely to be dual use. It does seem like you can at least make some distinctions and get better at noticing which is which. Or maybe it's making institutions better at governing and regulating how science gets used or something.

Is it possible that throwing a bunch of effort at that makes this whole question less important, because you just kind of get rid of this bad risk part of the calculation? Yeah, so it is possible. I was really interested in this idea, and I think there are some good theoretical arguments. One interesting theoretical argument is that if you discover some kind of really advanced scientific solution to problems, a huge group like governments can coordinate around it.

Matt Clancy
You can get the world to coordinate around making this thing happen, like making mRNA vaccines or something. And faster science can basically tilt the balance so that we have more of those kinds of frontier things available to these big actors. And presumably that kind of cutting-edge stuff that takes the coordination of hundreds or thousands of people wouldn't be available to bad actors. But I ended up coming away a little bit pessimistic about this route, just because the reduction in risk that you need to hit seemed really large. And the way I looked at this was, rather than pausing science for a year and evaluating that, what if you got an extra year of science somehow through efficiency gains, you know, the equivalent of an extra year?

And if that allowed you to reduce the danger from this time of perils by some percentage or some amount, how big would that need to be? I thought a reasonable benchmark is however much we reduce all-cause death per year: that's what science is sort of trying to figure out now, we have a bunch of scientists working on it, and we can look at how much life expectancy tends to go up from year to year. So that's just a ballpark place to start. And if that's kind of what you get from an extra year, you reduce the annual probability of death by some small amount.

And now you apply that to these estimates from the super forecasters or the domain experts. Well, it's just not big enough to make a difference. It doesn't move the needle very much in this model. Is that really kind of the best we could do, though, if we tried really hard to do better science, if we had some sense of what the risks were, and we have at least some sense, could better science just be really, really, really targeted at making sure those risks don't play out? I mean, I think it is possible.

And so I don't say that this is definitive or anything, but I say it would need to be better than we're doing against sort of the suite of natural problems that are arising. I'll give you two reasons. I'll give you a reason why to be optimistic and a reason to be pessimistic. A reason to be optimistic is that we actually do have pretty good evidence that science can respond rapidly to new problems. When Covid-19 hit, prior to Covid-19 the number of people doing research on that kind of disease was all but zero.

Afterwards, like one in 20 papers, this is based on a paper by Ryan Hill and other people, one in 20 papers was in some way related to Covid. So basically 5% of the scientific ecosystem, which is a huge number of people, pivoted to working on this. You can see similar kinds of responses in other domains, too.

Like the funding for different institutes in the National Institutes of Health tends to be very steady; it just sort of goes up with inflation or whatever and doesn't move around very much. But there were like three times when it moved around quite a lot in the last several decades. One was in response to the AIDS crisis, and another was after 9/11, when there was a lot of fear about bioterrorism. These are both examples where something big and salient happened, and the ecosystem did change and pour a lot of resources into trying to solve those problems. The counterargument, though, why you might want to be pessimistic, is that it took a long time to sort out AIDS, for example.

And in general, even though science can begin working on a problem really quickly, that doesn't mean a solution will always be quick in arriving. And, you know, throughout the report we assume that the benefits from science take 20 years before they turn into some kind of technology, and that's under normal science, not any kind of accelerated crisis thing. And then they take even longer to spill out over the rest of the world. So for the mRNA vaccines that ended Covid quickly, the underlying research had been going on for decades. It wasn't like Covid hit and then we were like, we've got to do all this work to try to figure out how to solve it.

Luis Rodriguez
Right? Make it up from scratch. Right. Okay. Yeah, that seems balanced, I guess, overall, it sounds like you're not that optimistic about this, but you wouldn't rule it out as a thing worth trying or aiming for.

Matt Clancy
Sure, yeah, I think that's certainly something that you would try. Yeah. But I just, you know, it's a big lift. But remember, that's only like one of the two ways that we sort of think about science reducing risk. And that's sort of just moderating how bad things are during this time of perils.

And it's kind of based on this assumption that there's not a lot of forward looking planning where 20 years in advance, we're trying to figure out how are we going to deal with this when it comes. It's more like, when it's here, how quickly would we be able to respond to whatever the sort of novel threat is. Yep. Okay. Okay.

Luis Rodriguez
So the other counterargument that might make science look less clearly bad, if you're super worried about extinction risk in the way the domain experts are, is this argument made by Leopold Aschenbrenner, which is that we might be able to rush past the time of perils, or rush through it. So by actually accelerating everything a lot, we both enter this time of biological perils sooner, but also, I don't know, sprint through it to get to the other side faster. That seems pretty intuitively plausible to me. How do you think about it? Yeah, I mean, I think it's really easy to understand.

Matt Clancy
Like, imagine bad times start in ten years, and good times start after 30 years; that's when we discover, say, a miracle cure. If you have science go twice as fast, the bad times start in five years, but the good times get here in 15 years. In the first example, you're 20 years in this period where every year bad people are using this technology.

You don't have a good countermeasure for it and they're trying to hurt you. And so that's 20 years that you have to try to beat them to the punch by investigating and foiling their plans before it happens. And if you have faster science, maybe you only have to do that for ten years and maybe that's better. And so that's the basic argument. And notice that if this is true, then this also reduces extinction risk.

It's not just these conventional, non-civilization-ending kinds of bad things happening; the argument would apply equally well to extinction risk. And I think this is definitely a plausible argument. I just... I don't know.

And I would say, I don't know, 50-50 that that will work. So, yeah, yeah. I guess it does feel very theoretical to me. Like, maybe you could also argue that if the time of perils is brought five years earlier, we are less prepared for it, and so it goes worse per year during those years than it would have if it had come later.

Luis Rodriguez
Whereas otherwise we could have spent the next ten years, I don't know, thinking seriously about these risks. So it's always felt to me like a yeah-maybe kind of argument, but not at all like there's some iron law that says that if we just accelerate everything, the period becomes shorter. Yeah, I mean, I think you wouldn't want to bring things forward if you kind of trust in non-technological solutions being an important part of how we deal with this. For example, maybe we actually already know how to make PPE, and it's just about building large stockpiles and getting a good monitoring system in place that uses existing technology.

Matt Clancy
And if that's the case, then yeah, you might want more time to build that infrastructure, and you would prefer that route. On the other hand, maybe you think that stuff doesn't work that well. You can compare the efficacy of lockdowns and social distancing to mRNA vaccines and conclude that the end solution is going to be technological, just because we're bad at the other stuff, the social stuff. Yeah, yeah. In which case maybe you trade getting it here earlier for the possibility of getting a technological solution.

Uh, I think one other interesting wrinkle is just that if you look at the forecasts in the Existential Risk Persuasion Tournament, the domain experts do seem to think the risks will fall in the long run, but we don't know if that's for one reason or the other. Oh, you don't? Okay. They haven't said something like, we think it's going to go up because of x, but then we think it'll go down because we're going to solve this somehow.

Yeah. Not that I could see in the tournament results, and obviously, if you asked them, they would have views. Sure, sure, sure. So I don't want to lean too hard on it, but I think it's notable that they do seem to think there is going to be something like an end to this, or a reduction in this peril risk, at some point in the future.

But I don't know if they think that's because of better preparation by society or because of technological fixes. Yeah, yeah. That's just really interesting. I guess, when I'm thinking of other questions that might affect your bottom-line results, one question you could ask is whether there's a point at which we're actually just too late to delay the time of perils. So, in theory, the time of perils might be decades from now, and pausing science would in fact meaningfully delay its start.

Luis Rodriguez
But I guess it also seems possible, given the time it takes for technology to diffuse and maybe some other things, that we could just already be too late. That's right, yeah. And so that's another kind of additional analysis we look at in the report is that, as you said, we could be too late for a lot of reasons. One simple scenario is that all of the know how about how to develop dangerous genetically engineered pathogens is sort of already out there. It's latent in the scientific corpus, and AI advances are just sort of going to keep on moving forward.

Matt Clancy
Whether or not we do anything with science policy, it's driven by other things, like what's going on at OpenAI or DeepMind, and eventually that's going to be advanced enough to organize and walk somebody through how to use this knowledge that's already out there. And so one risk is that it actually hasn't happened yet, but there's nothing left to be discovered that would be decisive in this question. Would that mean we're just already in the time of perils? You know, the time of perils is this framework, right? That's how you estimate the probability.

And if you would want to bring that forward, technically we wouldn't be in it, but we would be too late to sort of stop it. Like it's coming at a certain point. And I guess this is also a useful point to say, like, it doesn't have to actually be this discrete thing where, like, we're out and then we're in. It could be the smooth thing, but it gets riskier and riskier over time. And this whole paper is sort of like a bounding exercise.

This is a simple way to approach it that I think will tend to give you the same flavor of results as if you assumed something more complicated. Sure. Okay. And so I guess just being in this kind of time of heightened risk doesn't guarantee anything terrible. So in theory, we still want to keep going; we still want to figure out what the right policy is.

Luis Rodriguez
So if we're in that world where we're too late, what exactly is the policy implication from your perspective? Yeah, I mean, basically there's this quote from Winston Churchill: if you're going through hell, keep going. And so, you know, we're in a bad place, and there's no downside to science in this world, right? If you think things can't get worse, if you think the worst stuff has already been discovered and it's just waiting to be deployed, well, then you've kind of got to bet on accelerating science: there are all these normal benefits we would get, it's not going to get worse because that die has already been cast, and maybe there's a way to get to the other side and so forth.

Matt Clancy
So I think that's the implication. It's a bad place to be, but at least it's clear from a policy perspective what the right thing to do is. Yeah. Okay.

Luis Rodriguez
Do you have a view on whether we are, in fact, already too late? I'm, you know, not a biologist, I'm an economist, but it's another one where I'm like, I don't know, 50/50. Okay. Jesus. All right.

Well, that is, that's unsettling, but we'll leave that for now. Yeah, we've actually touched on this a bit, but I want to address it head on now. So whether you use those forecasts from domain experts or superforecasters feels like it could make a really big difference to my own personal kind of bottom line takeaway. Which do you put more weight on, domain experts or superforecasters? Yeah.

Matt Clancy
So again, not a biologist, but I think that there are at least some kind of outside-view reasons that I trust the superforecasters a little bit more than the domain experts. And I talk about three in the report. So one is that the tournament report had this problem: how do you provide incentives for people to do good forecasting for events that are going to occur way in the future, beyond what they care about? And one thing they tried was rewarding people for being able to predict what other forecasters would think.

If you're a superforecaster, what are the biosecurity experts going to think about this problem, and vice versa? And one kind of interesting finding from that report is that people who are better at that task tend to be more optimistic about existential risk stuff, by actually quite a lot when you compare the extremes. So that's one reason why I lean towards the superforecasters: they're the more optimistic group of the two, and it seems that being able to forecast what other people think, which we think might be a proxy for understanding the arguments well, is correlated with being more optimistic. So that's one reason.

Luis Rodriguez
Okay, so that's one. A second reason is that there's this weird optimism or pessimism correlation across lots of domains. So if you're worried about existential risk from biology, from extinction and stuff, people who had that view tended to also be more worried about AI risk, nuclear risk, and even what they called non-anthropogenic x-risks, which is stuff not caused by people, so supervolcanoes or meteorites hitting.

Matt Clancy
So basically it just sort of seems like some people are optimistic about our future and think all of those risks are smaller, and some people are more pessimistic and think all of the risks are greater. And that doesn't tell you who's right and who's biased. But are the superforecasters excessively optimistic, or are the domain experts excessively pessimistic? I can see reasons why I would lean towards thinking the domain experts are more pessimistic.

So the superforecasters, like, why would they be disproportionately optimistic? Maybe all of society is, and they're pulled from society, but this is a group that receives feedback on forecasts. If they sort of have a general optimism bias in their life and on other forecasts or whatever, they're a group that's going to get feedback and an opportunity to at least correct that bias. Domain experts aren't necessarily in the forecasting game all the time. They're not necessarily going to be getting that kind of feedback.

And there's this plausible selection story where, if you are worried about these things, you might choose to learn about them, become an expert, and try to stop them. And to be clear, I think it's good that people try to stop them, and it makes sense that the people who are worried about them would be the people who do. But that's a reason why I trust the superforecaster forecasts a bit more. And the third reason is this really scant evidence.

We have like two studies that are kind of related to the track records of these two different groups. So one of them was this 2014 report by Gary Ackerman that described this research project to get domain experts, who in this case were futurists and biosecurity experts, to forecast the probability of biological weapon attacks over the period 2013 to 2022, so this ten-year period. And 2022 has now resolved, so we can see how they did. And the thing is that there was basically one attack in that period. They gave a range of estimates: lower bound, upper bound, middle, and so forth.

And their lower bound estimates add up to 1.23 attacks forecast over ten years. So their lower bound estimate is about right. The median, or midpoint, probability is closer to three attacks they would expect over that period. So anyway, their lower bound is more correct than their median value. And it just so happens that the domain experts also gave bounds.

And the lower bound estimates for the domain experts that were asked in the existential risk persuasion tournament are close to the median superforecaster estimates. So, okay, that's one reason there. But, you know, small numbers are hard to draw conclusions from. It's like one attack; if there had been one more attack, suddenly the results would change.
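As a rough illustration of how little a single observed attack can distinguish these forecasts, here is a back-of-the-envelope check (my own sketch, not something from the report): treat each group's total expected number of attacks over the decade as the rate of a Poisson process and ask how likely the one observed attack is under each.

```python
from math import exp, factorial

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of observing exactly k events when the expected count is lam."""
    return lam ** k * exp(-lam) / factorial(k)

observed_attacks = 1         # roughly one attack actually occurred over 2013-2022
expected_lower_bound = 1.23  # sum of the experts' lower-bound forecasts (from the episode)
expected_midpoint = 3.0      # approximate sum of their midpoint forecasts (from the episode)

for label, lam in [("lower bound", expected_lower_bound), ("midpoint", expected_midpoint)]:
    p = poisson_pmf(observed_attacks, lam)
    print(f"P(exactly {observed_attacks} attack | expected {lam:.2f}, {label}) = {p:.2f}")

# Prints roughly 0.36 for the lower bound versus 0.15 for the midpoint: the single
# observed attack favours the lower bound, but not by enough to settle much.
```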

So you don't want to put too much weight on it. Sure, sure. But some weak evidence. Yeah. And then the other one is that one of my colleagues, actually a new colleague, David Bernard, has a dissertation chapter where they looked at how superforecasters and domain experts forecast things several years out.

And in this case, the domain experts are not biosecurity experts. We're not talking about x-risk or anything like that; we're talking about economic development, because that's the data he had. There are all these different economic development interventions, and they ask academics who are experts in that field: what do you think is going to be the impact of this three years later, five years later? I think maybe up to nine years later is the furthest out they go, and they compare that to superforecasters. And basically the superforecasters score a bit better by standard scoring measures.

But the domain experts don't do terribly either. Their midpoint is accurate, but they're just too confident about things, whereas the superforecasters have a wider range, maybe because they've got more experience. So anyway, that's the third reason, and like I said, not a ton to lean on, but it is something we looked into. And I think you add that to the intersubjective probability stuff and this correlation of pessimism or optimism, and that leans me towards the superforecasters: if I had to pick one, I think the superforecasters are more likely to be

right. I don't feel super confident about it. I wouldn't bet the farm on it or whatever. But a three-in-four chance that they're the ones I'd go with. Okay, a three-in-four chance.

Luis Rodriguez
Yeah. Okay. Well, I wish I'd put a number on it in advance to check what I would have done, but I guess we don't have to pick. It seems like we can use them as a range, or people could come up with their own range that puts some weight on domain experts and some on superforecasters.

Matt Clancy
That's true. One interesting thing is that some of the superforecasters do think the risks are higher. And so if you go to the 90th percentile, the 10% who think the risks are highest, they're pretty close to the biosecurity average. And if you look at the lower bound on the biosecurity experts, like I said, they're not too far off from the superforecaster median. Right.

Luis Rodriguez
So it's not even a perfect, I don't know, distinction, but yeah. Interesting. Okay, so you lean towards super forecasters, I guess. Yeah. This just seems like such an important consideration.

If someone, maybe like me, is still pretty divided. I'm really sympathetic. I think overall I'd put more weight on the superforecasters, too. But I'm also like, I don't know, maybe a lot of what makes superforecasters really good comes from the fact that they're predicting things within the window of things we've seen recently. And maybe the kinds of things we're talking about are just not the kinds of things that we've seen recently.

But that doesn't mean they don't happen. It just means that they happen so seldom that if we make predictions based on the recent past, we won't see them coming. That seems plausible to me. And so what should we do if we're just really uncertain about which to pick? Because in some cases they'll point to really different outcomes.

Matt Clancy
Yeah. So the answer we kind of come up with is sort of a cheat, because it's a secret third option. But where do I come down on this? I'm actually very confident faster science is a good thing. I think that's because, one, I think the superforecaster forecasts are probably more likely to be right. But also, even if the other guys, the biosecurity experts, are right, which I think is still reasonably likely,

I think there's a decent shot that faster science would actually still be good, either because we're too late, or, more optimistically, because they can develop these solutions faster. So for those reasons, I'm sort of like: all right, I think I'm really pretty confident that faster science is a good idea. But also, the secret third option is that we can do stuff to mitigate downside risks, and those are very important. And so we can have a two-pronged strategy. And, in fact, that's sort of what I've settled on.

And what we settled on for the innovation policy program is that we should, in general, try to accelerate scientific progress, and keep an eye out for projects that seem like they have extreme downside risk, because those are the ones that are really bad. But also, you can have a separate set of projects that are trying to reduce technological risk. An analogy is climate change: you could try to mitigate climate change by slowing economic growth, because the two are maybe connected, or you could try to reduce climate change through targeted renewable energy policies or carbon capture stuff, and, in another basket of your policy space, have stuff that you think makes economic growth better, like R&D policy, for example.

And so that's the approach that, I think, is my ideal coming out of this report. Okay. Is that kind of close to what you would have thought was reasonable before you'd done this? Yeah, I think that before we had started this project, I really was unsure what we would find.

Like, when you're building a model and you're going to put parameters into it, it could give you different answers. And so I think it's possible we would have ended up in a space where Open Phil goes full-on into sort of differential technological development and tries to focus on that. I think that was never super likely; there was always a small chance. But it's also not that likely that

we would just sort of completely put these issues to bed, because when we started, there were these sort of compelling arguments that people had and so forth. But anyway, an outcome from this report, and not just this report but other discussions, is that not everyone at Open Phil, for example, is necessarily as confident as me that faster science is net positive. But I think most people now think that faster science, plus doing some work to try to mitigate risks, is a net positive. So that bundle is a good thing. Cool.

Luis Rodriguez
Okay. Yeah, actually, that leads to my next question, which was just: how has this work been received by your colleagues? Have people tended to agree with you? Have they been skeptical?

Matt Clancy
Yeah, like I said, disagreements sort of remain, but I think people don't think that accelerating science is a huge negative, if it is negative at all, especially if you add in some of this stuff to try to make things safer. So now our mantra, I think, is sort of safe acceleration. So, you know, you check the crosswalk for pedestrians and then you hit the gas.

Luis Rodriguez
Okay, nice, nice. Cool. Yeah. Are there any other ways the report has kind of influenced your strategy, or is the kind of core thing safe acceleration? I think that's the core idea, I'd say.

Matt Clancy
But while we were writing this report and before we had finished it, we had this very cautious kind of approach: we were going to focus on grantmaking we were very confident about, and, by default, assume grants could be bad just while we were working on this, and had to be convinced that they couldn't be. So, for example, we focused a bit of work on social science research, because I'm a social scientist, and nobody's that worried that people will misuse social science for evil. After the report, like I said, I think we are now, by default, kind of not worried: faster science is probably good. Send us your ideas if you have ideas for doing that.

But we basically now consult with the biosecurity and pandemic preparedness team about what we're thinking and where we're going, and give them a chance to flag stuff that might have a big unexpected downside risk that I'm missing, because that's not my area of expertise. And like I said, we also have this new sort of sub-strategy that we launched, that we haven't done too much in yet, but that is going to try to look for opportunities in this space too. Yeah. Cool. Given all that, what kinds of levers do you have available to you for meaningfully improving science?

Luis Rodriguez
Yeah. What's going wrong when people try to do science that could go better? Yeah. So this is going to sound stereotypical coming from an economist, but think about the contrast between how we choose what science to fund versus how we choose which businesses receive funding in other sectors. You've got a whole advanced banking system.

Matt Clancy
There's an entire, like, field and discipline of finance. People's bonuses are tied to how well they make investments. And like, tons of people pour tons of energy into figuring out how to do this. Science is super high return and super valuable and lays the foundation for a lot of future technological progress. But the way that we choose what science to fund is comparatively less developed.

We might say it's anonymous peer review panels. People's performance is not necessarily tied to how well they do. There's lots of concern among people that they're sort of excessively risk averse, and maybe things can get captured. So basically, one sort of area is, are there better ways to do this? It's not a given.

Maybe we actually have landed on the best you can do for a really hard problem. But we can try alternative ways and we can sort of try to study how well they work. And that might be one class of problems. Another class of problems that I kind of alluded to earlier is like science is increasingly specialized and getting bigger and bigger all the time. So it's hard for people to keep up on the literature.

It's hard for people to keep up even on their own narrow discipline. And that could be a problem, because a lot of the best ideas come from cross-pollination, where somebody develops an idea and it gets used in an unexpected context and you get a breakthrough that way. If everybody is just struggling to keep up with their own specialty, maybe it's hard to understand and keep up with what other people are doing. And that's one reason we're experimenting with these living literature reviews, which try to synthesize the literature and make it accessible to non-specialist audiences. But there's lots of other things you could potentially try.

A third topic is maybe the incentive to produce reliable, replicable research. There's the whole replication crisis. Science provides really strong incentives to push the frontier and discover things that nobody has known before: you get a lot of social credit and prestige, and maybe you can get grants and publications from doing that. It provides less strong incentives to reproduce and make sure other people's work stands up, and to publicize when it doesn't, and so forth.

And so we make some grants that try to make progress on those issues, too. I guess a final one that's often not thought of as a science policy issue, but is actually closely related, is immigration policy, especially high-skilled immigration, because the talent pool for science could come from anywhere in the world, but the resources to do science and to learn and to network with scientists are not as evenly distributed. And so you can either try to distribute the scientific resources more equitably, which is another thing I'm interested in looking at in the future, or you can try to reform immigration policy to let people with the aptitude and interest in science go to where the science can be done. Cool. Cool.

Luis Rodriguez
What are some examples of things that you think might make it work better? You've just listed a bunch of problems, and I imagine the answer is to solve those problems. But are there any kind of specific ideas for how, or things that people are already doing? Maybe, yeah. So one grantee that I like is the Institute for Replication, which is looking to replicate a bunch of economics articles that are published in top journals, or a set of journals.

Matt Clancy
And actually they've expanded to lots of other disciplines beyond just economics, but I'm biased because I'm an economist. Anyway, the thing that's kind of interesting about them is that rather than cherry-picking and having this ad hoc replication system, where when you get replicated your heart freezes up because you think, what do they have against me, why are they coming for me, it's just: well, if you publish in these top journals, we want to make it the norm that somebody's going to try to replicate your work. And replication in this context basically means taking your data, thinking about what the right approach is, and then making sure they code it up and try it themselves, and variations on that.

We're not going to rerun experiments, because we're usually using government data in this context, but anyway, it's making it a norm. It's not that they're out to get you. So you get a better baseline of what's normal and what's not normal. And also, if you're a policymaker trying to devise a better policy based on the latest research, if you know that everything in this set of top journals gets replicated, well, then you can go look for it and see whether it replicated. And that just becomes a norm too.

So I think that's one example of a solution that we like. And I think they also have a nice system for moderating the interaction between the replicator and the original authors, which could be contentious under the previous ad hoc approach. And that could be a disincentive, because then the only kind of people who want to be involved in replicating stuff are people who have an appetite for contentious conflict or something. So that's one example of a grantee we like.

Luis Rodriguez
Cool. Yeah, that is really, really exciting. Shout out to the Institute for Replication. Huge shout out. Now you've just piqued my interest.

If you had another example, I'd be super curious. Yeah. So I don't have another example that's as crisp, but a general approach that we're very excited about is finding partners who fund, or in some ways run, the infrastructure of science, so maybe journals and so forth, who are interested in experimenting with the way they do things, and pairing them with social scientists and economists so that they can learn carefully how it works.

Matt Clancy
I mean, science has this problem where it's hard, I think, to casually infer whether something is a good idea or not, because it takes a long time for results to become clear and for the community's view of them to settle. Outliers matter a lot: sometimes maybe you only care about getting one home run and you're willing to take a lot of losses, but that means you need a very big sample to tease out the home runs. And so you kind of need, I think, to partner with social scientists and think about this in a bigger-picture way than just trying something and seeing if it works. You might have to actually work with people, and then we can share that information with other people. And if we aggregate up a lot of other people trying similar stuff, we can get the big sample sizes we need to improve the system.

So I think that's the approach that I would love to promote. Okay, let's leave that there for now. So we've been talking about metascience, and metascience is kind of about trying to accelerate scientific and technological progress. But maybe the biggest thing that we can do to accelerate scientific progress is to solve artificial intelligence. Then we could have basically a bunch of digital scientists doing all the science we want.

Luis Rodriguez
And this idea is actually related to explosive growth, which is the subject of an interview we did with Tom Davidson, one of your former colleagues, where he made the case that developing AGI, so AI that can do 100% of cognitive tasks at least as well as humans can, could cause explosive economic growth, leading us to an honestly unrecognizable world. But I think you're more skeptical of this. I think that you think there's a ten to 20% chance that this happens, and that's from a kind of write-up that you've done that we'll talk more about.

And I guess just for context: yeah, we're talking really specifically about a world where AI can do most or all of the things humans can do. So things like running companies and all the planning and strategic thinking that comes along with that, designing and running scientific experiments, producing and directing movies, conducting novel philosophical inquiry. And the argument is that AI at that level will cause GDP to grow at least ten times faster than it has over the last few decades, so potentially 20% per year, which would mean we'd get 90 years of technological progress in the next ten years, which really is kind of mind-boggling.

So a big reason that you're skeptical of this is that the people who think explosive growth is likely are putting a lot of weight on the fact that economic models predict explosive growth if you just plug in a much bigger AI labor force: those models predict that, yeah, we just get a bunch more growth, because right now we're kind of bottlenecked by not having more people who can do more things, but the AI labor force could be huge. Can you explain that core argument in a bit more detail, in case I'm missing anything? Sure.

Matt Clancy
Yeah. To clarify a couple of things at the beginning: this has nothing to do with the report anymore. This is now a different interest I have at Open Phil, and, I don't know,

I find it pretty interesting. And I've chatted with Tom, for example, a lot about this, and it's great. I also, just to clarify, am not saying it doesn't matter or it's no big deal. My modal guess, or my average, is: yeah, it will improve productivity. But I think I'm a lot more skeptical than some people about this explosive growth at 20% per year type of thing.

Like, I'd be like: oh, 5% per year? Wow, this is crazy. But 20% per year, I'm a lot more skeptical of. So let me lay out a little economics model that I think is really helpful for thinking about this.

It's mathy, but I'm going to give you the simplified, mostly non-mathy version that I think gives you the intuitions for the different ways that AI could matter. Perfect. Just imagine that we're trying to invent something new, and inventing has a thousand steps that need to be completed. There are a thousand different tasks.

Everything from discovering the fundamental scientific theories that underlie the principles this thing relies on, to a bunch of testing. And then there's also stuff like fine-tuning it, figuring out how to manufacture it, and even diffusing it to people, because productivity growth is when people are using technology: it's not when the technology exists, it's when people are using it to do things in the world. Okay, so we've got a thousand tasks to get there, and we're going to assume that you've got to do all this stuff, but you can maybe substitute between some of them to some degree.

So you can do maybe the scientific theory really well, and then you don't have to search as much or you don't have to do as good a job on downstream parts. But there's some degree to which all of this stuff has to get done, but maybe to different extents. For simplicity, we got 1000 different tasks. Let's imagine we've got 1000 scientists and inventors, and each one of these takes about a year on average. And so we're going to get this all done in a year because they can all work in parallel at the same time.

So that's our starting point. But now we're going to add one wrinkle, which is that invention gets harder and harder. And this is a pretty robust finding, that it takes more and more researchers to increase productivity by a certain amount. So let's just, for simplicity, say that every time you invent something, the next invention takes 10% more cognitive effort. And so you're either going to need to spend more time thinking about each of these tasks, or you've got to put more people on it, and then they can team up together and so forth.

So if we wanted to produce one invention every year, we'd actually have to increase our labor force by 10% per year to offset this things-getting-harder effect. But there's another route we could take besides just getting more people, which is that we could get robots to do some of the jobs. So suppose we're able to figure out how to automate 5% of the jobs that humans are doing, and we can put robots on those tasks instead. That has an interesting effect: the 5% of the jobs people were doing, robots are now doing. So those people are kind of like free agents; they can do something else.

And so, in particular, they can go work on the other stuff, and that gives us 5% extra people to work on the other tasks. And now, if we also add 5% to our overall labor force and we've automated 5% of the tasks, that's another way we can get an extra 10% of people working on the stuff that only humans know how to do.

And that, again, gets us to one invention every year. So in the first case, it's because we're just increasing the population of scientists by 10% a year. In the second, we increase the population by 5% per year, but we keep shifting people away from stuff that they used to have to do. Like, maybe they had to do their regressions and statistical analysis by hand, and now a machine does it, and so forth.

So that's actually, I think, a pretty good description of what has been happening for most of the 20th century. We have been steadily figuring out how to hand off parts of our cognitive work, or actual work, to machines. We invent Excel, Word, Google Scholar, lots of stuff. And people focus on different parts of the invention process, and they don't spend as much time doing regressions as maybe they used to.
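To make that baseline story concrete, here is a minimal numerical sketch, with illustrative numbers of my own rather than anything from Matt's report: each invention needs about 10% more effort per task than the last, and the invention rate stays roughly flat either by growing the research workforce 10% a year, or by growing it 5% a year while automating 5% of the remaining human tasks each year and reallocating the freed-up people.

```python
# Stylised task-based invention model: labor is spread over the tasks humans still do,
# robots cover the automated tasks, and each invention needs more effort than the last.
# All numbers are illustrative assumptions, not parameters from the report.

def invention_rate(labor: float, human_task_share: float, effort_per_task: float) -> float:
    labor_per_human_task = labor / human_task_share
    return labor_per_human_task / effort_per_task  # inventions per year (stylised)

effort = 1.0                  # effort each human task needs for the current invention
labor_a = labor_b = 1.0       # research labor force in each scenario (normalised)
share_a, share_b = 1.0, 1.0   # share of tasks still done by humans

for year in range(10):
    rate_a = invention_rate(labor_a, share_a, effort)  # route A: 10% more researchers per year
    rate_b = invention_rate(labor_b, share_b, effort)  # route B: 5% more researchers + 5% automation
    print(f"year {year}: route A = {rate_a:.2f}, route B = {rate_b:.2f} inventions/year")
    effort *= 1.10    # ideas get harder: next invention needs 10% more effort per task
    labor_a *= 1.10
    labor_b *= 1.05
    share_b *= 0.95   # 5% of the remaining human tasks are automated each year
```

Both routes hold the invention rate roughly steady, which is the "steady growth even though ideas get harder" pattern described above.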

And that is consistent with steady growth, and also with increasing the population of researchers working on stuff and so forth. Now, you can say: this is the core model, so how does AI fit into this? Why would AI, in this framework, lead to much faster growth? And there are three options.

One is that you could just flood the zone with lots of robots on the tasks that robots can do. Remember, we assumed that there's a bit of substitutability between these tasks. So maybe you can put, like, a thousand times as many digital minds working on the things digital minds are good at, and maybe that means you only need to do the other stuff a tenth as much. Well, that would mean the humans could do their part in one-tenth the time, and they'd invent at ten times the pace.

And maybe this works because AI is really scalable: once you train a model, you can copy it and deploy a lot of effort on this. So that's one option, this flooding of AI effort onto tasks they can do. A different route you can go is that you could have this big surge in what share of the tasks the AI can do. So instead of figuring out how to automate 5%, maybe they automate 90% of the stuff humans are doing, and that frees up 90% of the people to work on the last 10% of tasks.

It's kind of like you've 10x'd the labor supply for that small share of things that only humans can do. Again, maybe they finish ten times as fast, and that's how you can get ten times the speed of growth or invention. And maybe this works because we think AI is just super general: it can figure out anything, and so it can do lots of tasks. And then the third option is that if you can actually just eventually automate all of those tasks, so you don't need humans at all, then you can get these arguments where this leads to sort of a positive feedback loop. So imagine the thing that we're inventing.

So I'm adding an extra step to our thought experiment. Imagine the thing we're inventing is a robot factory, okay? And every time we make a new factory, we discover a new way to do it, and the factory is 20% more efficient than the last one. And that means, basically: all right, now we can build 20% more robots than we had.

We only needed 10% more, because invention gets harder. So we've got more robots than we need, we can get it done even faster than we did last time, and then we get to the next step and we have another 20%, and we can get it done even faster. And so you just get this acceleration until you reach, like, ten times the speed you were at, and so forth. So those are the three main options: flood the zone on the tasks robots can do with tons of cognitive labor; a surge in the share of tasks that are automated; or just automating everything. And the last thing is that they can kind of interact with each other.

So maybe the thing that they get better at is automating: maybe you're inventing systems or ways to automate better. And then there are also maybe stronger incentives to finish up the last rump of tasks that humans can do, because then you could get this positive feedback thing that would be really valuable. So that's the argument for explosive growth. Right.
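Matt doesn't write down the math here, but one standard way to formalise this intuition is a CES (constant elasticity of substitution) aggregate over tasks, where a low elasticity means tasks are poor substitutes for each other. The sketch below uses made-up numbers purely to contrast the first two options, and it previews the Moore's law bottleneck point that comes next.

```python
# Hypothetical CES sketch of the task framework above; the functional form and all
# numbers are my own assumptions for illustration, not anything from the episode.

def ces_output(automated_share: float, x_auto: float, x_human: float, rho: float) -> float:
    """CES aggregate over automated and human-only tasks.
    rho well below zero means tasks are poor substitutes, so you get bottlenecked."""
    return (automated_share * x_auto ** rho + (1 - automated_share) * x_human ** rho) ** (1 / rho)

rho = -1.0  # elasticity of substitution of 0.5: hard to substitute between tasks

baseline = ces_output(0.5, x_auto=1, x_human=1, rho=rho)
# Option 1, "flood the zone": 1,000x the effort on the automated half of tasks.
flood = ces_output(0.5, x_auto=1000, x_human=1, rho=rho)
# Option 2, "automation surge": automate 90% of tasks; the freed humans pile onto the
# remaining 10%, so effort per remaining human task rises roughly tenfold.
surge = ces_output(0.9, x_auto=10, x_human=10, rho=rho)

print(f"baseline: {baseline:.2f}x, flood the zone: {flood:.2f}x, automation surge: {surge:.2f}x")
# Roughly 1x, 2x, and 10x: a thousandfold flood of AI labor buys barely a doubling when
# substitution is poor, while a surge in the automated share delivers the tenfold speedup.
```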

Luis Rodriguez
Okay. And then my understanding is that you, at least you're more skeptical of it. You don't put no weight on it, but you think it's less likely than at least some of your colleagues. Yeah. Why is that?

Matt Clancy
Basically, yeah, I think that it's a possible scenario, but I think that people underweight frictions that would cut against it. And remember, I think the model is a useful guide for the direction we're going, but less useful for predicting the magnitude of ten times growth. Okay, so you're like: yes, AGI is going to be a big deal, people need to update towards it changing the pace of economic growth in some way.

Luis Rodriguez
And it will probably, yeah, make growth a little bigger, but the claims that it's going to be explosive are the thing where you're like: possible, but I think it's underrating a lot of things. So to take the first option, which was flooding the zone with cognitive labor on certain tasks: we kind of have run that experiment. There were tasks that computers could do.

Matt Clancy
Moore's law led to just this astronomical increase in chips, and essentially cognitive power, on those kinds of tasks. It didn't lead to a noticeably large increase in the rate of economic growth. And that kind of implies that there are really strong limits to how much you can substitute between these tasks, and so that means you get bottlenecked on the things that only humans can do, even if you can throw a thousand times or a million times as much labor at the other stuff.

And a million or a billion is kind of the magnitude we're talking about in terms of Moore's law; it doesn't translate into less need to do the other stuff. So that's the first option. But, you know, there were other routes to explosive growth, and one of them was this surge in automation, where we can replace humans in, like, 90% of the tasks or something. So one challenge there is that if it continues to be the case that you can't substitute very well, then the stuff that humans need to do really is a bottleneck.

Well, even though you have all this freed labor that can go work on that bottleneck, it might take time for people to be retrained to go work on it. Tasks are very different: especially if you're imagining moving from being a scientist to something more like the entrepreneur stage of the innovation process, those are pretty different skills. But I think more importantly, the world is just really full of details and frictions that are hard to see from 30,000 feet, but they're omnipresent throughout the economy, and they're not all the kinds of things that can be bulldozed through with just more intelligence.

Luis Rodriguez
Yeah. Can you give some examples? Yeah. So kind of factors that can matter is like, it might take time to collect certain kinds of data. It takes like, you think of a lot of AI advances are based on self play, alphago, or they're sort of self playing, and they can really rapidly play a ton of games and learn skills.

Matt Clancy
If you imagine what the equivalent of that is in the physical world: well, we don't have a good enough model of the physical world where you can kind of self-play. You actually have to go into the physical world and try the thing out and see if it works. And if it's something like testing a new drug, that takes a long time to see the effects. So there's time.

It could be about access to specific materials, or maybe physical capabilities. It could be about access to data: you could imagine that if people see their jobs getting obsoleted, they could refuse to cooperate with sharing the data needed to train on that stuff. There are social relationships people have with each other; there's trust when they're deciding who to work with, and so forth. And if there are alignment issues, that could be another potential issue.

There are incentives that people might have. People might also be in conflict with each other and able to use AI to try to thwart each other: you could imagine fights over intellectual property rights, with people on both sides using AI, so the process doesn't go much faster, because they're both deploying a lot of resources at it. The short answer is that there are lots of these frictions in any particular application. I think you see this a lot in attempts to apply AI advances to any particular domain.

Like, oh, it turns out there's a lot of sector-specific detail that had to be ironed out, and there's this hope that those problems will disappear at some stage, and maybe they won't. One example that I've been thinking about recently is: imagine we achieved AGI and could deploy 100 billion digital scientists, and they were really effective, and they could discover and tell us, here are the blueprints for technologies that you won't invent for 50 years. So you just build these and you're going to leap 50 years into the future in the space of one year. What happens there? Well, this is not actually as unprecedented a situation as it seems.

There are a lot of countries in the world that are 50 years behind the USA, for example, in terms of the technology that is in wide use. And this is something I looked at in updating this report: what are the average lags for technology adoption? So why don't these countries just copy our technology and leap 50 years into the future? In fact, in some sense, they have an easier problem, because they don't even have to invent the technologies.

They don't have to bootstrap their way up. It's really hard to make advanced semiconductors because you have to build the fabs, which take lots of specialized skills themselves. But this group, they don't even have to do that. They can just use the fabs that already exist to get the semiconductors and so forth, and leapfrog technologies that are intermediate, like cellular technology instead of phone lines and stuff like that. And they can also borrow from the world to finance this investment.

They don't have to sort of self-generate it. But that doesn't happen, and the reason it doesn't happen is because of tons of little frictions that have to do with things like incentives and so forth. And you can say, well, there are very important differences between the typical country that is 50 years behind the USA and the USA, and maybe we would be able to actually just build the stuff. But I think you can look at who the absolute best performers were in this situation.

They changed their government, they got the right people in charge or something, and they did this as well as you can do it, like the top 1%. They don't have explosive economic growth. They don't converge to the US at 20% per year. You do, very rarely, observe countries growing at 20% per year, but it is always because they are a small country that discovers a lot of natural resources. It's not through this process of technological upgrading.

Luis Rodriguez
Is that true of Asian countries that I feel like have had really fast economic growth, which seems at least partly driven by adopting new technologies? Yeah, I think these are examples of countries that are performing at sort of the best that you can do, and they're growing at 10% per year, maybe one year at 15%, but not a continuous 20% per year kind of thing. So, in fact, if you wanted to say 10%, I'd be like: oh, well, we've got some precedent for that from China.

Matt Clancy
Maybe this is semantics, but 20% is the number that is sort of thrown around, and it's just, I think, very ambitious, let's say that. So that's another example of why I think these frictions matter. And one last point: back in the early days of this kind of economics, Bob Solow and other people were coming up with the first models of technological progress.

They had these models that made predictions like this: they basically said that because there's a declining marginal product of capital, countries that are really capital-poor have high returns to capital, so these countries are going to get a lot more investment and they're going to converge and grow at a faster rate. And you could have been Matt Clancy at that time and said: well, actually, there are all these frictions that make that kind of thing hard to pull off. And somebody could have said: yes, but Matt, you're underestimating that the order-of-magnitude difference in the capital levels of these countries is so massive that it's going to overwhelm any little frictions. But it turns out that was wrong. The frictions really did add up, and a small number of countries did grow fast, but they didn't necessarily grow as fast as what's being predicted might be the outcome from AI, which is also, I think, based on a simplified economic model of the world.

Luis Rodriguez
Okay, so let's say that there are these frictions, and they're the kinds of things that meant that even though some countries in Asia got to learn about a bunch of new technologies and implement them kind of all at once, they saw big growth, but not huge growth, and it wasn't super sustained. The Tom Davidson on my shoulder, who has made this big case for explosive growth on the show before, would say: if that's the case, then the priority of AI systems, or of humans making AI systems to optimize for different things, is going to be to iron out those frictions.

So I think Tom argued that, for experiments that involve humans, we'll do a bunch of them in parallel. They'll be so good at optimizing these processes that we actually can do much better than you'd imagine based on how long things take now and the fact that they're just biological processes that have to happen.

And if tons of resources are just thrown at the bottlenecks in particular, do you still feel really, really skeptical that those bottlenecks will just slow everything down? Yeah, I'm unsure. That's why, like I said in one interview, I'm at a ten to 20% chance that this all happens, which I think is this weird middle ground between people who are really enthusiastic and people who are really skeptical. But I do have the experience of running a program where we think there are these important problems with how things are going, and it's hard to just convince everyone to fix them. I think that we have long had incentives to solve various problems that are very important and that the market is really aligned on, and sometimes these are just hard problems. We don't instantly solve things

Matt Clancy
That there is a strong incentive to solve. Like, that is not a natural property of the world, I would say. Okay, okay. So if you just wait long enough, do you eventually think we get this? Because maybe part of my belief is that these are really, really hard problems to solve, but having this much bigger labor force with a higher kind of ceiling on how intelligent the kind of minds working on these problems can be, at some point, there will be breakthroughs that mean that we can solve bottlenecks that we've not been able to solve before.

Luis Rodriguez
And so, I don't know, I don't have that much confidence in any particular timeline, and obviously I don't think it's an inevitability, but I put some high likelihood on this thing happening eventually. So I think I share the view that it seems very plausible that eventually we figure out how to automate lots of this stuff.

Matt Clancy
And machines are doing it all, and doing it maybe better than people. And I think that if we get into a world like that, we're going to be in a world that's probably a lot better off and richer and so forth. The thing that I disagree a little bit about is whether that will imply we end up with explosive growth at that point. Because if it takes a long time to get there, if these are hard problems, then by the time we get there, we're a rich society, and rich societies have the luxury of putting a lot of value on, I don't know, things that are not survival-relevant and so forth. We may think that lots of the characteristics of human beings are the things that we care about.

Like, oh, I want to be talking to something that I know is sentient. Think about now: tons of people who were able to went to go see Taylor Swift in concert. They can stream all of her stuff anytime they want, but they like the in-person human experience, and they paid a lot of money for that. And a society that gets richer might think: oh, well, we still value that.

Yeah, it's true, but we just value the stuff that the robots can't do so well. And that's a good place to be. It means you're very wealthy and have the luxury of indulging in these kind of preferences that are obscure. But it can still mean that you don't get explosive economic growth. Because maybe it's just like the kinds of stuff that explosive economic growth could give us is not the kind of stuff that people are that interested in getting.

They don't want to build islands that look beautiful, or maybe they want to a little bit. They want to be on the island where all their friends are and there's limited space or something. I mean, I'm not sure that's a great example, but that's kind of the flavor of it. Yeah, yeah, yeah. Okay, so let's say you're thinking about the next two to five years.

Luis Rodriguez
What do you imagine the world looking like if we were headed toward more explosive growth? I mean, maybe you don't even think the next two to five years is a plausible time span to start getting indicators of that. But what would be early signs, in your view, that the explosive growth thing is more plausible than you think now? Yeah, I mean, my theory is based on frictions to adoption. So if it turns out that we're just seeing adoption happen really quickly in lots of places, not just in one place really quickly, because I imagine there are going to be some places where there's not much friction and those places might adopt quickly, but if you see it across a wide range, that's something where I'd be like: oh, I guess

Matt Clancy
quantitatively, maybe these frictions are there, but they just don't matter that much. And so that would be one sign. Another thing is that a general trend in the economics of innovation is that science is getting harder, and so people are assembling bigger and bigger teams of specialists to solve problems: you can just see the number of co-authors on papers keeps going up.

If AI is substituting for an increasing share of tasks, one very concrete indicator is the size of teams in science going down, because people are just offloading more of their co-author duties to an AI. Those are two examples of things I might see that would make me think, oh, maybe this is coming. But I think the first one, adoption across lots of sectors, is maybe a better indicator than the second, which is specific to the idiosyncrasies of science. Okay, so those are things that you might see going forward that might make you think explosive growth is more likely. Is there anything that you've learned, I don't know,

Luis Rodriguez
Since you first started thinking about the topic of explosive growth, that's kind of changed your thinking about it in general? So I'll give one example that's maybe more confident in my original views, and one thing that's maybe less confident. So on my original views, like one example that's often given is that we have a precedent for these rapid accelerations in the dawn of agriculture and in the industrial revolution. So let's focus on the industrial revolution. Like where we went from, growth plausibly increased.

Matt Clancy
There was a long-run average over the preceding several hundred years of maybe 0.1%, and after the Industrial Revolution it went ten times faster, to 1%. Okay. And actually, I think I looked this up: it went to 1.6% on average over a longer time period.

So this is often given as proof that these kinds of discontinuities can happen. And I think one thing that should give you a little bit of hesitation about that: if you look at our best estimates for economic growth back before the Industrial Revolution, from 1200 to 1600 or so, and this is using Angus Maddison's data, this economist who had these really long-run time series, how often did you observe growth above 1% or 1.6% per year, which is the kind of thing we observe in the era of the Industrial Revolution?

And the answer is, it's like one-third of the time; you saw this fast growth all the time. You even saw, with relative frequency, very long runs where growth averaged Industrial Revolution levels for maybe 30 years. It's just that it gets knocked back. They didn't persist for hundreds of years; you'd fall back, and things would go up and down a lot.

So it was noisy. Today, it's not the same: we don't see people sometimes growing at 20% but then having it offset a few years later by 5% or 1% or negative 10%. And so I think that's one reason to think that this is a more unprecedented prediction than the past might suggest. That's one thing that's updated me in favor of my original view. Now, in the other direction: I don't know if this is so much about explosive economic growth, but I am personally struck by just the advance of large language models.

I was working full-time on writing New Things Under the Sun, which is this living literature review summarizing and synthesizing literature, before I joined Open Phil, and I am very glad that that is not my full-time job anymore, because I don't know where that's going in the next couple of years. And this grants program that we run does give people financial support to write them, but with the general idea that people are going to do it part-time and have another role, because it seems like that job is maybe getting automated in the near future. So I'm keeping my eyes on it. It's not there yet, but it's coming down the road, I think you can kind

Luis Rodriguez
Of see it makes sense to me. I also. I also worry about my job. Okay. So if you want to learn more about different sides to kind of the debate on explosive growth, I really, really recommend this article that you wrote with Tammy Bezeroglu in Asterisk magazine called the great a debate about AI and explosive growth.

It's written as a dialogue between the two of you, and if you prefer audio, Matt and Tamay recorded themselves performing the parts that they wrote for themselves as part of this dialogue, and that's available on New Things Under the Sun. For now, we've got time for one final question. Matt, what is your favorite thought experiment? Yeah, I'll stick with one that is generally related to our themes about technological progress.

Matt Clancy
Nice. So there is this paradox that comes up in time travel called the bootstrap paradox. They kind of do this in Bill and Ted's Excellent Adventure, if you've seen it, where he's like: I'm going to, in the future, find this key and put it here. And then he pulls up the rug and the key's there, and then he just has to remember, in the future, to put it back. So you could imagine that as a general thing, where you could say: I intend in the future to give myself this object, and then I'm going to show up right now and give it to myself, and there you have it.

And then you just have to make sure you give the object to yourself in the past. And there's this question of: where did that object come from? And the variation that I think is kind of interesting, about whether time travel is possible, is: could you do this with information about time travel itself? So could you form this intention?

Like: I'm going to first learn how to build a time machine after being tutored by my future self, then I'm going to devote my life to implementing the instructions that I have been told by my future self. I'm going to build this time machine according to the specifications, I'm going to take notes, and then maybe there's some tinkering and experimenting, whatever. Anyway, I'm going to figure it out, and then I'm going to use that to go back and give myself the specifications for it.

In principle, I could do that right now. I could just form this intention that this is what I'm going to do. And nobody's coming through the door to tell me how to build the time machine. So, you know:

Does this imply, like, time travel is not possible, or is there some other hole? Anyway, that's one that I thought of. And, you know, where does technological progress come from? Hopefully not just, like, from the void like that, but maybe sometimes. Yep.

Luis Rodriguez
I really love that. Yeah. Interesting. I wonder if you're not setting the intention hard enough; like, you just have to really, really want it.

You're being too cheeky with it. Yeah. You've got to actually set that intention. I don't believe you've done it yet. Somebody out there please try it with the serious effort that is required to make it work.

Absolutely. Yep. Cool. Well, that's new to me. Thank you.

Thank you for that. My guest today has been Matt Clancy. Thank you so much for coming on. It's been really interesting. Thank you so much for having me.

If you enjoyed this episode, you might be interested in speaking to our one-on-one advising team. At 80k, we measure our impact by how many of our users report changing careers based on our advice. And one thing we've noticed among planned changes is that listening to many episodes of this show is a strong predictor of who ends up switching careers. So if that's you, speaking to our advising team can help massively accelerate your pivot into a new role or career. They can connect you to experts working on our top problems who could potentially hire you.

They can flag new roles and organizations you might not have heard of and point you to helpful upskilling and learning resources, all in addition to just giving you feedback on your plans, which is something we can all use. You can even opt into a program where the advising team recommends you for roles that look like a good fit as they come up over time. So even if you feel like you're already clear on your plans and have plenty of connections, it's just a really good way to passively expose yourself to impactful opportunities that you might otherwise miss because you're busy or not job hunting at any given point in time, or because you're maybe underestimating your fit for a particular role. And like everything we do at 80,000 Hours, it's all totally free.

The only cost is filling out an application. And while applications often seem daunting, our advising application form only takes about ten minutes. So all you have to do is share your LinkedIn or CV, tell us a little bit about your current plans and which top problems you think are most pressing, and hit submit. You can find all our one-on-one team resources, including the application, at 80000hours.org/speak. If you've been telling yourself for a while that you'd apply for 80k advising, or you've just been on the fence about it, now is a really good time to apply, because this summer we'll have more call availability than ever before.

All right, I'll leave you with that. The 80,000 Hours Podcast is produced and edited by Keiran Harris. The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire, Simon Monsour, and Dominic Armstrong. Full transcripts and an extensive collection of links to learn more are available on our site, and put together, as always, by Katy Moore. Thanks for joining.

Talk to you again soon.