#191 - Carl Shulman on the economy and national security after AGI

Primary Topic

This episode delves into the profound implications of affordable, superhuman artificial general intelligence (AGI) on global economy and security.

Episode Summary

In this engaging six-hour discussion, host Rob Wiblin and guest Carl Shulman explore the potential economic and geopolitical consequences of advanced AGI systems capable of performing tasks at a level of efficiency and scale far beyond human capabilities. They cover topics like the accelerated pace of AI development, its impact on international conflict, and the moral considerations of AI as autonomous entities. Shulman, a seasoned researcher with deep insights into AI and existential risks, shares his vision of a future shaped by the omnipresence of AGI in all forms of human endeavor—from economic production to governance, transforming every aspect of society.

Main Takeaways

  1. AGI could revolutionize the economy by replacing human labor almost entirely, leading to explosive growth.
  2. The geopolitical landscape could be reshaped as countries compete or cooperate to harness AGI's capabilities.
  3. Ethical and moral considerations of AGI will become increasingly important as these systems gain autonomy.
  4. The potential for AGI to self-improve could lead to scenarios where human control over these systems becomes tenuous.
  5. Strategies for integrating and regulating AGI will be critical to avoiding destabilizing outcomes.

Episode Chapters

1. Introduction and Background

Rob Wiblin introduces Carl Shulman, who discusses his focus on AGI's implications for future societal structures. Shulman shares insights on the transformative potential of AGI across various sectors. Rob Wiblin: "Carl Shulman stands alone in his capacity to forecast the implications of hypothetical technologies."

2. Economic Transformations

Discussion on how AGI could alter economic structures, potentially making traditional forms of human labor obsolete. Carl Shulman: "AGI could lead to economic outputs that far exceed anything imaginable today by replacing costly human labor with more efficient AI systems."

3. Geopolitical Implications

Exploration of how AGI could shift global power dynamics, affecting everything from military to diplomatic relations. Carl Shulman: "The introduction of AGI could drastically alter geopolitical strategies and global power balances."

4. Ethical Considerations

The moral status of AI and its integration into society are debated, questioning how rights and ethical treatment should extend to non-human intelligences. Carl Shulman: "We need to consider the ethical implications of AI as potential sentient beings."

Actionable Advice

  1. Educate yourself on AI and its potential impacts.
  2. Engage in discussions about the ethical use of AI.
  3. Advocate for policies that manage AI development responsibly.
  4. Prepare for changes in the job market due to AI advancements.
  5. Consider the security implications of AI in your professional field.

About This Episode

The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
Many people sort of consider that hypothetical, but maybe nobody has followed through and considered all the implications as much as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.

People

Carl Shulman, Rob Wiblin


Guest Name(s):

Carl Shulman

Content Warnings:

None

Transcript

Carl Shulman
An AI model running on computers with brain-like efficiency is going to be working all the time. It does not sleep, it does not take time off, and it does not spend most of its career in education or retirement or leisure. So if you do 8,760 hours of the year, 100% employment at $100 per hour, you're getting close to a million dollars of wages equivalent. So if you were to buy the amount of skilled labor today that you would get from these 50,000 human brain equivalents, at the high end of today's human wages, you're talking about, per human being, an energy budget on earth that could sustain more than $50 billion worth, at today's prices, of skilled cognitive labor.

If you consider the high end, the scarcer, more elite, higher compensated labor, then it's even more.

Rob Wiblin
Hey, listeners, Rob Wiblin here. In my opinion, in terms of his capacity and willingness to think through how different hypothetical technologies might play out in the real world, Carl Shulman stands alone. Though you might not know that much about him yet, his ideas have been hugely influential in shaping how people in the AI world expect the future to look. And speaking for myself, I don't think anyone else has left a bigger impression on what I picture in my head when I try to imagine the future. The events that he believes are more likely than not are wild, even for someone like me, who is used to entertaining pretty wacky ideas.

Longtime listeners will recall that we interviewed Carl about pandemics and other threats to humanity's future besides AI back in 2021. But here we've got 6 hours on what Carl expects would be the impact of cheap AI that can do everything people can do, and more, something he's been reflecting on for about 20 years. Hour for hour, I feel like I learned more talking to Carl than anyone else I could name. AI researchers expect this hypothetical future of cheap, superhuman AI that recursively self-improves to arrive within 15 years, and maybe within the next five years. So these are issues society is turning its mind to too slowly, in my view.

We're splitting this episode into two parts to make it more manageable. The first is going to cover AI and the economy, international conflict, and the moral status of AI minds themselves, while the second will cover AI and epistemology, science, culture, and domestic politics. To give you a bit more detail here: in part one, we first dive into truly considering the hypothetical of what would happen if we had AIs that could do everything humans could do with their minds, with a similar level of energy efficiency: not just thinking about something nearby that, but concretely thinking through how the economy would function at that point, and what human lifestyles might look like. Fleshing that out takes about an hour.

But at that point, we then go through six objections to the picture Carl paints, including why we don't see growth increasing now, whether any complex system can really grow itself so quickly, whether intelligence is actually that useful, practical physical limits to growth, whether humanity might choose to simply prevent all of this from happening, and the fact that it all just might sound a bit too crazy. Then we think about arguments that economists specifically give for rejecting Carl's vision, including Baumol effects, the lack of robots, policy interference, bottlenecks in transistor manufacturing, and the need for a human touch, whether that is in childcare or in company management. Carl explains in each case why he thinks economists' conventional bottom lines on this topic are mistaken, and at times even self-contradictory. Finally, through all of that, we've been imagining AIs as though they were just tools without their own interests or moral status. But that may not be the case.

And so we close by discussing the challenges of maintaining an integrated society of both humans and non-human intelligences, in which both live good lives and neither is exploited. A few times in this episode, we refer to Carl's last interview, which was on the Dwarkesh podcast in June 2023. In that one, he talked about how an intelligence explosion happens, the fastest way to build billions of robots, and a striking, concrete step-by-step account of how an AGI might try to take over the world. That was perhaps my favorite podcast episode of last year, so I can certainly recommend going and checking it out if you like what you hear here. There's not really a natural ordering of what to listen to first.

These are all just different pieces of the complex, integrated picture of the future that Carl has been developing, which I hope he'll continue to elaborate on in future interviews. And now I bring you Carl Shulman, and what the world would look like if we got cheap, superhuman AGI.

Today I'm speaking with Carl Shulman. Carl studied philosophy at the University of Toronto and Harvard, and then law at NYU. He's an independent researcher who blogs at Reflective Disequilibrium. And while he keeps a low profile, he's had as much influence on the conversation about existential risks as anyone. And he's also just one of the most broadly knowledgeable people that I'm aware of.

In particular, for the purposes of today's conversation, he has spent more time than almost anyone thinking deeply about the dynamics of a transition to a world in which AI models are doing most or all of the work, and how the government and economy and ordinary life might look after that transition. So thanks so much for coming back on the podcast, Carl. Thank you, Rob. I'm glad to be back. I hope to talk about what changes in our government structures might be required in a world with superhuman AI and how an intelligence explosion affects geopolitics.

But first, you've spent a lot of time trying to figure out what's the most likely way for the world to transition into a situation in which AI systems are doing almost all the work, possibly all of it, and then also picturing what the economy would look like and how it might actually be functioning after that transition. Why is that a really important thing to do, that you thought was worth investing a substantial amount of mental energy into? Sure, Rob. So you've had a number of guests on discussing the incredible progress in AI and the potential for that to have transformative impacts. One issue that's pretty interesting is the possibility that humans lose control of our civilization to the AIs that we produce.

Carl Shulman
Another is that geopolitical balances of power are greatly disrupted, that things like deterrence in the international system and military balances are radically changed, and any number of other issues. But those are some of the largest. And the amount of time that we have for human input into that transition is significantly affected by how fast these feedback processes are. And characterizing the strength of that acceleration points to what extent you may have some parts of the world pull away from others: that a small initial difference in, say, how advanced AI technology is in one alliance of states rather than another translates into huge differences in economic capabilities or military power.

And similarly for controlling AI systems and avoiding a loss of control from human civilization: the faster those capabilities are moving at the time we get to really powerful systems where control problems could become an issue, the less opportunity there will be for humans to have input, to understand the thing, or for a policy response to work. And so it matters a lot whether you have transitions from AIs accounting for a small portion of economic or scientific activity to the overwhelming majority. If that takes 20 years rather than two years, it's going to make a huge difference for our ability to respond. What are some of the near-term decisions that we might need to make, or states might need to be thinking about over the next five years, that this sort of picture might bear on?

Rob Wiblin
What sort of decisions might it bear on in the near future? Sure. Well, some of the most important, I think, are whether to set up the optionality to take regulatory measures later on. So if automation of AI research means that by the time you have systems with roughly human-like capabilities, without some of the glaring weaknesses and gaps that current AI systems have, AI software capabilities are doubling not on a timescale like a year but every six months, three months, or one month, then you may have quite a difficult time mounting a regulatory response. And if you want to do something like, say, set up hardware tracking, so that governments can be assured about where GPUs are in the world, so that they have the opportunity to regulate if it's necessary in light of all the evidence they have at the time, that means you have to set up all of the infrastructure and the systems years in advance, let alone the process of political negotiation, movement building, setting up international treaties, and working out the kinks of enforcement mechanisms.

Carl Shulman
So if you want the ability to regulate these sorts of things, then it's important to know to what extent you will be able to put it together quickly when you need it, or whether things will be going so fast that you need to set it up earlier. One of the important decisions that could come up relatively soon, or at least as we begin to head into rapid increases in economic growth, is that different countries or different geopolitical blocs might start to feel very worried about the prospect of very rapid economic or technological advances in another bloc, because they would anticipate that this is going to put them at a major strategic disadvantage. And so this could set up quite an unstable situation, in which one bloc moving ahead with this technological revolution ahead of the other could, I guess, trouble the other side to a sufficient degree that they could regard it almost as a hostile act. We should think about how we're going to prevent conflict over this issue, because one country having an economy that's suddenly ten or 100 times larger than another would potentially give it such a decisive strategic advantage that even the prospect of that would be highly destabilizing. Yeah, I think this is one of the biggest sources of challenge in negotiating the development of advanced AI. So obviously, the risk of AI takeover, that's something that's not in the interest of any state.

And so to the extent that the problem winds up well understood by the time it's really becoming live, you might think: okay, everyone will just design things to be safe; if they are not yet like that, then companies will be required to meet those standards before deploying things; and so there will be not much problem there. Everything should be fine. And then the big factor, I think, that undermines that is this pressure and fear, which we already see in things like chip nationalism.

So there are export controls placed by the US and some of its economic partners on imports of advanced AI chips by a number of countries. You see domestic subsidies in both the US and China for localization of chip industries. And so there's already some amount of politicization of AI development as an international race. And that is in a situation where so far, AI has not meaningfully changed balances of power. It doesn't thus far affect things like the ability of the great powers to deter one another from attacks.

And the magnitude of those effects that I would forecast gets a lot larger later on. And so it requires more effort to have those kinds of tensions tamped down and to get agreements that capture benefits that both sides care about and avoid risks of things they don't want. And that includes the risk of AI takeover from humans in general. There's also the fact that, if the victor of an AI race is uncertain, the different political blocs would each probably dislike finding themselves militarily helpless with respect to their rivals more than they would value having that position of power over their rivals. And so potentially there's a lot of room for deals that all parties expect to be better going forward, that avoid extreme concentration of power that could lead to global dominance by either rogue AI or one political bloc, but it requires a lot of work.

And making that happen is, I think, more likely to work out if the various parties who could have a stake in those things foresee some of these issues, make deals in advance, and then set up the procedures for trust building, verification, and enforcement of those deals in advance, rather than a situation where these things are not foreseen until late in the game, it becomes broadly perceived that there's a chance for extreme concentration of power, and then there's a mad scramble for it. And I think we should prefer, on pluralistic grounds and for the low-hanging-fruit gains from trade, a situation where there's more agreement and more negotiation about what happens, rather than a mad rush where some possibly non-human actor winds up with unaccountable power. Okay, so what you were just saying builds on the assumption that we're going to see very rapid increases in the rate of economic growth in countries that deploy AGI. You think that we could see the global economy doubling in much less than a year, rather than every roughly 15 years, as it does today.

Rob Wiblin
That's in part because of this intelligence explosion idea, where progress in AI can be turned back on the problem of making AI better, creating a possibly very powerful positive feedback loop. For many people, though, those sorts of rates of economic growth, well over 100% per year, will sound pretty shocking and require some justification. So I'd like to spend some time now exploring what you think a post-AGI economy would look like, and why.

What are the key transformations that you expect we would observe in the economy after an AI capabilities explosion? Well, first, your description talked about AI feeding back into AI, and so that's the AI capabilities explosion dynamic, which seems very important in getting things going. But that innovative effort then applies to other technology. And in particular, one critical AI technology is robotics.

Carl Shulman
And robotics is heavily limited now by the lack of smart, efficient robot controllers. And so, as I discussed on the Dwarkesh podcast, with rich robotic controllers and a surfeit of cognitive labor to make industry more efficient, manage human workers and machines, and then make robotic replacements for the human manual labor contributions, you're quickly moving into the physical world and physical things. And really, the economic growth or economic scale implications of AI come from both channels: one, greatly expedited innovation from having tremendously more and cheaper cognitive labor; and two, eliminating the human bottleneck on the expansion of physical industry, where right now, as you make more factories, if you have fewer workers per factory and fewer workers per tool, the additional capital goods are less valuable. By moving into a situation where all of those inputs of production can be scaled and accumulated, you can just have your industrial system produce more factories, more robots, more machines, and, at some regular doubling time, just expand the amount of physical stuff.

And that doubling time can potentially be pretty short. So in the biological world, we see things like cyanobacteria or duckweed lily pads that can actually double their population using energy harvested from the sun in as little as, I think, 12 hours in the case of cyanobacteria, and a couple of days for duckweed. You have fruit flies that, over a matter of weeks, can increase their population 100-fold. And that includes little biorobotic bodies and compute in the form of their tiny nervous systems.

So it is physically possible to have physical stuff, including computing systems and bodies and manipulators, double on a very short timescale. Such that, you know, if you take those doubling rates over a year, that exponential goes on to use up the natural resources on the earth and in the solar system. And at that point, you're not limited by the growth rate of labor and capital, but by these other things that are in more fixed supply, like natural resources and solar energy. And when we ask, what are those limits?

If you have robotic industry expand to the point where the reason it can't expand more, why you can't build your next robot, your next solar panel, your next factory, is that you have run out of natural resources. So, on Earth, you've run out of space to put the solar panels or the heat dissipation from your power industry is too great. If you kept adding more, it would raise the temperature too much. You're running out of metals and whatnot. That's a very high bar.

And so, right now, human energy consumption is on the scale of ten to the 13 watts. So that is in the thousands of watts per person. Solar energy hitting the top of the atmosphere: not all of it gets down, but it's in the vicinity of two times ten to the 17 watts. So thousands of times our current world energy consumption reaches the earth. If you're harvesting 5 or 10% of that successfully, with very high efficiency solar panels, or otherwise coming close to the amount of energy use that can be sustained on the earth, that's enough for a million watts per person.

And a human brain uses 20 watts; a human body uses 100 watts. So if we consider robotics technology and computer technology that are at least as good as biology, where we have physical examples showing this is possible because it's been done, that budget means you could have, per person, an energy budget that can, at any given time, sustain 50,000 human brain equivalents of AI cognitive labor, or 10,000 human-scale robots. And then if you consider smaller ones, say insect-sized robots, or small AI models like current systems, including much smarter small models distilled from the gleanings of large models and with much more advanced algorithms, even more. And that's on a per-person basis.
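To make the arithmetic explicit, here is a minimal sketch of the per-person energy budget Carl is describing, using the round numbers he cites; the population figure is my own assumed round number, and the result matches his ~1 million watts, ~50,000 brains, and ~10,000 robots after rounding down.

```python
# Per-person energy budget, using the figures quoted above.
incoming_solar_w = 2e17      # watts hitting the top of the atmosphere
harvest_fraction = 0.05      # lower end of the 5-10% range Carl mentions
population = 8e9             # people alive today (assumed round number)

watts_per_person = incoming_solar_w * harvest_fraction / population
brain_equivalents = watts_per_person / 20     # 20 W per human-brain equivalent
robot_equivalents = watts_per_person / 100    # 100 W per human-scale robot

print(f"{watts_per_person:,.0f} W per person")        # ~1,250,000 W ("a million watts")
print(f"{brain_equivalents:,.0f} brain equivalents")  # ~62,500 (Carl rounds to 50,000)
print(f"{robot_equivalents:,.0f} robot equivalents")  # ~12,500 (Carl rounds to 10,000)
```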

That's pretty extreme. And then when you consider the cognitive labor being produced by those AIs, it gets more dramatic. So the capabilities of one human brain equivalent worth of compute are going to be set by what the best software in the world is. So you shouldn't think, well, what is average human productivity today?

Think about, for a start, as a lower bound, the most skillful and productive humans. And so, in the United States, there are millions of people who earn over $100 per hour in wages. Many of them are in management. Others are in professions and STEM fields: software engineers, lawyers, doctors. And there are even some who earn more than $1,000 an hour.

So, new researchers at OpenAI, high-level executives, financiers. An AI model running on computers with brain-like efficiency is going to be working all the time. It does not sleep, it does not take time off, and it does not spend most of its career in education or retirement or leisure. So if you do 8,760 hours of the year, 100% employment at $100 per hour, you're getting close to a million dollars of wages equivalent. So if you were to buy the amount of skilled labor today that you would get from these 50,000 human brain equivalents, at the high end of today's human wages, you're talking about, per human being, an energy budget on earth that could sustain more than $50 billion worth, at today's prices, of skilled cognitive labor.
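And a matching sketch of the wage-equivalence arithmetic, using the $100-per-hour figure and the 50,000 brain equivalents quoted above; at exactly $100 per hour it comes out to roughly $44 billion, so Carl's "more than $50 billion" presumably reflects somewhat higher wages or a slightly larger energy budget.

```python
# Wage-equivalent value of always-on AI labor, per the figures in the conversation.
hours_per_year = 8760                    # 24 * 365, no sleep, no time off
wage_per_hour = 100                      # dollars, high-end skilled labor
brain_equivalents_per_person = 50_000    # from the energy budget above

wages_per_brain_year = hours_per_year * wage_per_hour             # $876,000 ("close to a million")
total_per_person = wages_per_brain_year * brain_equivalents_per_person
print(f"${total_per_person / 1e9:.1f} billion per person per year")  # ~$43.8 billion
```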

If you consider the high end, the scarcer, more elite, higher-compensated labor, then it's even more. And if we consider an even larger energy budget beyond Earth: there's more solar energy and heat dissipation capacity in the rest of the solar system, about 2 billion times as much. If that winds up being used, because people keep building solar panels, machines, and computers until you can no longer acquire the energy and other resources to make it worthwhile at an affordable enough price, then multiply those earlier numbers by a million fold, 100 million fold, maybe a billion fold. And that's a lot. If you have 50 trillion human brains' worth of AI minds at very, very high productivity per human being, or perhaps a mass of robots like unto trillions upon trillions of human bodies, dispersed in a variety of sizes and systems, this is a society whose physical and cognitive, industrial and military capabilities are just very, very, very large relative to today. Yeah, so there's a lot there.

Rob Wiblin
Let's unpack that bit by bit. So the first thing we're talking about is the rate of growth and the rate of replication in the economy. Now, currently the global economy grows by about 5% a year. Why can't it grow a whole lot faster than that? Well, one thing is that it would be bottlenecked by the human population, because the human population only grows very gradually.

Currently it's only about 1% a year. So even if we were to put a lot of effort into building more and more physical capital, more and more factories and offices and things like that, eventually the ratio of physical capital to the people available to use that physical capital would get extremely unreasonable, and there wouldn't be very much that you could do with all of this capital without the human beings required to operate it usefully. So you're somewhat bottlenecked by the human population here, but in this world, we're imagining humans are no longer performing any functional productive role in the economy. It's all just machines. It's all just factories.

So the human population is no longer a relevant bottleneck. So in terms of how quickly we can expand the economy, we can just ask the question: how long would it take for this entire productive machinery, all of the physical capital in this world, to basically make another copy of itself? Now, eventually, you'll get bottlenecked, I guess, by physical resources, and we might have to think about going off of earth in order to unbottleneck ourselves on natural resources. But setting that aside for a minute, if you manage to double all of the productive mechanisms in the economy, all of the factories, all of the mines, all of the brains, then basically you should be able to roughly double output.

So then we've got this question: how quickly could that plausibly happen? And that's a tough question to answer. Presumably there is some practical limit given the laws of physics. To suggest a lower bound, you've pointed us to these similar cases where we already have complex sets of interlocking machinery that represent an economy of sorts, which grabs resources from the surrounding environment and replicates every part of itself again and again, so long as those resources are abundant.

And that's the case of biology. So we can ask, in ideal conditions, how long does it take for cyanobacteria or fruit flies or lily pads to duplicate every component in their self-replicating factories, to copy themselves and reproduce? And that, in some cases, takes days, or even less than a day in extreme instances of simpler organisms. Now, the self-replicating machine that is the lily pad may or may not be a perfect analogy for what we're picturing with a machine economy of silicon and metal. How do you end up benchmarking or thinking through how quickly the entire economy might be able to double its productive capacity?

How long would it take to do that reproductive process? Yeah. So, on the Dwarkesh podcast, I discussed a few of these benchmarks. One thing is to ask: just how much does a GPU cost compared to the wages of skilled laborers? And right now there are enormous, enormous markups, because there has been a demand shock.

Carl Shulman
Many companies are trying to buy AI chips, and there's amortization of the cost of developing and designing the chip and so forth. So you have a chip like the H100, which has computational power in flops that I think is close to the human brain, but less memory, and there are some complexities related to that. But basically, existing AI systems are adapted to the context of GPUs, where you have more flops and less memory. And so they operate the same model many times on, for example, different data. But you can get a similar result if you take 1,000 GPUs that collectively have the memory to fit a very large model.

And then they have this large amount of compute, and they will run, say, a human-sized model, but evaluate it thousands of times as often as a human brain would. Anyway, these chips cost on the order of $30,000; as we were saying before, skilled workers paid $100 per hour are going to earn enough in 300 hours to pay for another H100. And so that suggests a very, very short doubling time if you could keep buying GPUs at those prices, or lower prices, when, for example, the cost of the design is amortized over very large production runs. The cost would actually be higher if we were trying to expand our GPU production super fast. And the basic reason is that they're made using a bunch of large pieces of equipment that would normally be operated for a number of years.

So TSMC is the leading fab company in the world. In 2022, they had revenue on the order of $70 billion, and their balance sheet shows property, plant, and equipment of about $100 billion. So if they had to pay for the value of all of those fabs, all of the lithography machines, all of that equipment, out of the revenues of that one year, then they would need to raise prices correspondingly. But we're saying that if right now the price of GPUs is so low relative to the wages per hour of a human brain, then you could accommodate a large increase in prices.

You could handle what would otherwise be profligate waste of making these production facilities with an eye to a shorter production period.
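A quick back-of-the-envelope on the two numbers just mentioned: the H100 payback time against skilled wages, and how much prices would have to rise if TSMC's capital stock were amortized over a single year. The figures are the ones quoted in the conversation; everything else is simple division.

```python
# H100 payback against skilled wages (figures as quoted above).
chip_cost = 30_000       # dollars, order-of-magnitude H100 price
wage_per_hour = 100      # dollars, skilled labor
print(chip_cost / wage_per_hour)   # 300 hours: under two weeks of round-the-clock work

# Fab amortization: TSMC 2022 figures as quoted (revenue ~$70B, property/plant/equipment ~$100B).
tsmc_revenue = 70e9
tsmc_ppe = 100e9
print(tsmc_ppe / tsmc_revenue)     # ~1.4: repaying the whole capital stock in one year would
                                   # exceed a full year's revenue, hence much higher prices
```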

Rob Wiblin
Hey listeners, Rob here. I'll just quickly define a few things. Carl mentioned GPUs, which, as most of you probably know, stands for graphics processing units. That's the kind of computer chip mostly used for AI applications today. He mentioned TSMC, which is the world's biggest manufacturer of computer chips, based in Taiwan. In the ecosystem around TSMC, the other famous companies are Nvidia, which designs the cutting-edge chips that TSMC then manufactures.

And then there's ASML, which is a Dutch company and the only supplier in the world of the lithography machines that can print the most powerful GPUs. Okay, back to the interview.

Carl Shulman
And we can say similar things about robots. They're not as extreme as for computing, but industrial robots cost on the order of $50,000 to $100,000, and given sufficient controller skill, robotic software technology could replace several workers in a factory. And then if we consider vastly improved technology on those robots and better management and operation, that again suggests that the payback time of robotics, with the sort of technological advancements you'd expect from scaling up the industry by a bunch of orders of magnitude, huge technology improvements, and very smart AI software to control it, could be well under a year. And then for energy: there are different ways to produce energy, but there's a fairly extensive literature trying to estimate energy payback times of different power technologies. And this is relevant, for example, in assessing the climate impacts of renewable technology, because you want to ask: if you use fossil fuels initially, with carbon emissions, to make solar panels, and the solar panels then produce carbon-free electricity, how long does it take before you get back the energy that was put into it? And for the leading cells, those times are already under a year. And if you go for the ones that have the lowest energy inputs, so thin-film cells and whatnot in really good locations, equatorial deserts, that sort of place, you can get well under a year, more like two thirds of a year, according to various studies.

Now, that gets worse again if you're trying to expand production really fast, because if I want to double solar panel production next year, that means I have to build all of these factories now. And the energy required to build a factory that's going to make solar panels for five or ten years is larger than the one-fifth or one-tenth of it that would normally be attributed to a single year: in the standard energy payback analysis, they divide the energy used to build the factory across all of the solar panels it's going to produce. Nonetheless, solar panel efficiency and the energy costs of making solar panels have improved enormously. In the fifties, some of the first commercial solar panels cost on the order of $1,800 per watt, and today we're in the vicinity of $1 per watt.

And so if you expand solar production far beyond where we're at and have radically enhanced innovation, it does not seem much of a stretch to say we get another order of magnitude or so of progress of the sort that we've gotten over the previous 70 years, which is all within physical limits, because we know there are these biological examples and whatnot. And so that suggests we get down to an energy payback time that is well under a year, even taking into account that you're trying to scale the fabs so much, and that you adjust production to minimize upfront costs at the expense of the lifetime of the panels, that sort of thing. So, yeah, something like a one-month doubling time on energy looks like something we could get to.
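One way to see why an energy payback time translates into a doubling time for the energy system itself: assume, as a simplification of my own, that all of a panel's output is reinvested into building more panels.

```python
# Toy model: if a solar panel repays its embodied energy in `payback_months`,
# and all output is reinvested in making more panels, capacity doubles roughly
# once per payback period. (Ignores construction lags, losses, and other inputs.)
def capacity_after(months: float, payback_months: float) -> float:
    """Relative capacity after `months`, starting from 1 unit."""
    return 2 ** (months / payback_months)

print(capacity_after(12, payback_months=8))   # ~2.8x in a year at today's ~two-thirds-of-a-year payback
print(capacity_after(12, payback_months=1))   # ~4096x in a year at a one-month payback
```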

Rob Wiblin
Yeah. So those are some of the factors that cause you to think that plausibly, possibly, we could see the economy doubling every couple of months or something like that. So that was one part of the answer. Another part of the answer is that if we try to imagine what should be possible after we've had this enormous takeoff in the quality of our technology and this enormous takeoff in the size of the economy, one thing you can ask is, broadly speaking, how much energy should we be able to harvest? And then you're getting an estimate by saying, well, how much energy arrives on earth from the sun?

And then, plausibly, we will be able to collect at least 10% of that, and then we'll split it among people. And then how much mental labor should you be able to accomplish using that energy that we're managing to get? There you're using the benchmark of the human brain, where we know roughly the sort of mental labor that a human brain is able to do under good conditions, and we know that it uses about 20 watts of energy to do that. I guess if you want to say the human body is also somewhat necessary for the brain to function, then you get up to more like 100 watts. Then you can say, well, how many minds on computer chips could we, in principle, support using the energy that we're harvesting with solar panels, if we manage to get our AI systems to have a similar level of algorithmic efficiency and energy efficiency to the human brain, where you can accomplish roughly what a very capable, very motivated human can using 20 watts? And you end up with these absurd multiples where you say, well, in principle, we should be able to have possibly tens of thousands.

I think you were suggesting, though I didn't do the mental arithmetic there, that in effect, for every person, using that energy you could support the mental labor that would be performed by tens of thousands of lawyers and doctors and so on today. Is that broadly right? Well, more than that, because of working 100% of the time at peak efficiency. And no human has a million years of education, but these AI models would.

Carl Shulman
It's just routine to train AI models on amounts of material that would take millennia for humans to get through. And similarly, other kinds of advantages boost AI productivity: intense motivation to the task, adjustment of the relative weighting of the brain towards different areas. For some tasks, you can use very small models that would require a thousandth of the computation. For other tasks, you might use models much larger than human brains, which would be able to maybe handle some very complicated tasks.

And combining all of these advantages, you should do a lot better than what you would get if it was just human-equivalent laborers. But as something of a lower bound, we can say that, in terms of human brain equivalents of computation, yes, in theory, Earth could support tens of thousands of times that, and then far more beyond. Okay, so that's sort of the mental labor picture. And I think maybe it's already helping to give people a sense of why it is that this world would be so transformed, so different in terms of its productive capabilities, that a country that went through this transition sooner, where suddenly every person had the equivalent of 10,000 people working for them doing mental work.

Rob Wiblin
That would actually provide a decisive strategic advantage against other blocs that hadn't undergone that transition; the power imbalance would just be really wild. What about on the physical side? Would we see similar radical increases in physical productive ability to build buildings and do things like that? Or is there something that's different between the physical side versus the mental labor side?

Carl Shulman
Well, we did already talk about an expansion of global energy use. And similarly for mining: it's possible to expand energy use and apply improved mining technology to extract materials from lower-grade ores. So far in history, that has been able to keep peak oil or peak mineral X concerns from really biting, because it's possible to shift on these other margins. So, yeah, a corresponding expansion of the amount of material stuff and energy use, and then enormous increases in the efficiency and quality of those goods.

So, in the military context: if you have this expansion of energy and materials, then you can have a mass of military equipment that is, accordingly, however many orders of magnitude larger, with ultra-sophisticated computer systems and guidance. And it can make a large difference. Even with technological differences of only a few decades in military technology, the effects are pretty dramatic. So in the first Gulf War, coalition forces came in, and the casualty ratio was something absurd, hundreds or a thousand to one.

And a lot of that was because the munitions of the coalition were smart, guided, and just reliably hit their targets. And we would see that elsewhere. So just having tremendous sophistication in guidance, sensor technology, and whatnot would suggest huge advantages there. Not being dependent on human operators would mean that military equipment could be much smaller. So if you're going to have, say, 100 billion insect-sized drones or mouse-sized drones or whatnot, you can't have an individual human operator for each of those.

And if they're going into areas where radio transmission is limited or could be blocked, that's something that, unless they have local autonomy, they can't do. But if you have small systems by the trillions or more, such that there are hundreds or thousands of small drones per human on earth, then that means they can be a weapon of mass destruction. And some of the advocates against autonomous weapons have painted scenarios, which are not that implausible, about vast numbers of small drones having a larger killing power per dollar than nuclear weapons, dispersing to different targets. And then in terms of undermining nuclear deterrence: if the amount of physical equipment has grown by these orders and orders of magnitude, then there can be thousands or tens of thousands of interceptors for, say, each opposing missile. There can be thousands or tens of thousands of very small infiltrator drones that might go behind a rival's lines and then surreptitiously sabotage and locate nuclear weapons in place.

And with just the magnitude of difference in materiel, allowing such small and numerous systems to operate separately, and greatly enhanced technological capabilities, it really seems that if you had this kind of expansion and then you had another place that was maybe one or two years behind technologically, it might be no contest. Not just no contest in the sense of which is the less horribly destroyed survivor of a war of mutual destruction, but actually fundamentally breaking down deterrence, because it's possible to disable the military of a rival without taking significant casualties or imposing them. I suppose if you could just disarm an enemy without even imposing casualties on them, then that might substantially increase the appetite for going ahead with something like that, because the moral qualms that people would otherwise feel might just be absent. There's that.

And then even fewer moral qualms might be attached to the idea of just outgrowing the rival. So if you have an expansion of industrial equipment and whatnot that is sufficiently large, and that then involves seizing natural resources that right now are unclaimed. Because remember, in this world, the limit on the supply of industrial equipment and such that can exist is a natural resource based limit. And right now, most natural resources are not in use. So most of the solar energy, say, that reaches the earth is actually hitting the oceans.

Or Antarctica. The claimed territory of sovereign states is actually a minority of the surface of the earth, because the oceans are largely international waters. And then, if you consider beyond Earth, that, again, is not the territory of any state. There is a treaty, the Outer Space Treaty, that says it's the common heritage of all mankind. But if that does not translate into blocking industrial expansion there, you could imagine a state letting loose this robotic machinery that replicates at a very rapid rate. If it doubles twelve times in a year, you have 4,096 times as much.
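The 4,096 figure is just twelve doublings, and the same arithmetic shows why a fixed head start translates into a roughly constant capacity ratio while both sides are still growing exponentially; this is only a sketch of the arithmetic, not a claim about how such a race would actually unfold.

```python
# Twelve doublings in a year.
print(2 ** 12)              # 4096

# More generally, a head start of `lag` doubling periods leaves the first mover
# with about 2**lag times the capacity until resource limits are hit.
lag_doublings = 12          # e.g. a one-year lead at a one-month doubling time
print(2 ** lag_doublings)   # ~4096x more industrial capacity than the follower
```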

By the time other powers catch up to that robotic technology, if they were, say, a year or so behind, it could be that there are robots loyal to the first mover that are already on all the asteroids, on the moon, and whatnot. And unless one tried to forcibly dislodge them, which wouldn't really work because of the disparity of industrial equipment, there could be an indefinite and permanent gap in industrial and military equipment. And that applies even after every state has access to the latest AI technology. Even after the technology gap is closed, a gap in natural resources can remain indefinitely, because right now those sorts of natural resources are too expensive to acquire; they have almost no value.

The international system has not allocated them. But in a post-AI world, the basis of economic and industrial and military power undergoes this radical shift, where it's no longer so much about human populations, skills, and productivity, and in a few cases things like oil revenues and whatnot. Rather, it's about access to natural resources, which are the bottleneck to the expansion of industry. Okay, so the idea there is that even after this transition, even after everyone has access to a similar level of technology, if one country was able to get a one-year head start on going into space and claiming as many resources as they can, it's possible that the rate of replication up there, the rate of growth, would be so fast that a one-year head start would allow you to claim most of it, because other people just couldn't catch up in the race of these self-replicating machines that then go on and claim more and more territory and more and more resources. Is that right?

That's right, yeah. Okay. Something that's crazy intuitively about this perspective, where we're thinking about what sort of physical limits there are on how much useful computation you could do with the energy and the materials in the universe, is that we're finding these enormous multiples between where we're at now and where, in principle, one could be. Even just on earth, just using something that's about as energy efficient as the human mind, everyone could have ten to 100,000 amazing assistants helping them. Which means that there's just this enormous latent inefficiency in what is currently happening on earth relative to what is physically possible, for which to some extent you would have to hold evolution accountable, saying that evolution has completely failed to take advantage of what the universe permits in terms of energy efficiency and the use of materials.

Rob Wiblin
I think one thing that makes the whole thing feel unlikely or intuitively strange is that maybe we're used to situations in which we're closer to the efficient frontier, and the idea that you could just multiply the efficiency of things 100,000-fold feels strange and foreign. Is it surprising at all that evolution hasn't managed to get closer to the physical limits of what is possible in terms of useful computation? Yeah. So just numerically, how close was the biosphere to the energy limits of earth that we're talking about?

Carl Shulman
So, net primary productivity is on the order of ten to the 14 watts. So it's a few times higher than our civilization's energy consumption across electricity, heating, transportation, industrial heat. And so why was it a factor of 1000 smaller than solar energy hitting the top of the atmosphere? So one thing is not intercepting stuff high in the atmosphere. Sure.

Secondly, as I was just saying, most of the solar is hitting the oceans or otherwise land that we're not inhabiting. And so why is the ocean mostly unpopulated? It's because in order for life to operate, it needs energy, but it also needs nutrients. And in the ocean, those nutrients sink; they're not all at the surface. And so where there are upwellings of nutrients, in fact, you see an incredible profusion of life, at upwellings and in near-coastal waters.

But most of the ocean is effectively desert. And in the natural world, plants and animals can't really coordinate at large scales, so they're not going to build a pump to suck the nutrients that have settled on the bottom up to the surface. Whereas humans and our civilization organize these large scale things, we invest in technological innovation that pays off at large scales. And so, yeah, if we were going to provide our technology to help the biosphere grow, that can include having nutrients on the surface, so having little floating platforms that would contain the nutrients and allow growth there. It would involve developing the vast desert regions of the earth, which are limited by water.

And so, using the abundant solar energy in the Sahara, you can do desalination, bring water in, and expand the habitable area. And then even when we look at arable land, you have nutrients that are not in the right balance for a particular location; you have competition, pests, diseases, and such that reduce productivity below its peak. And then there's just the actual conversion rate of sunlight on a square meter: between green plants and photovoltaics, there's a significant gap. So we have solar panels with efficiencies of tens of percent, and it's possible to make multi-junction cells that absorb multiple wavelengths of light.

And the theoretical limit for those is very high; I think an extreme theoretical limit, one that involves making other things impractical, can go up to something like 77% efficiency. So by going to 40 or 50% efficiency and converting that into electricity, which is a very useful form of energy, the form of energy that things like computers do very well with, you do much better than photosynthesis, where you have losses to respiration and you're only getting a portion of the light at the right wavelengths and the right angles, et cetera, et cetera. And so, yeah, most of the potential area is not being harvested a lot of the year; there's not a plant at every possible site using the energy coming in.

There's not a plant at every possible site using the energy coming in. And our solar panels can do a bit better. And if we just ignore the solar panels, we could just build nuclear fission power plants to produce an amount of energy that is very large. And the limitation we would run into would just be heat release, that the world's temperature is a function of the energy coming in and going out. Infrared, the infrared increases with temperature.

And so if we put too many nuclear power plants on the earth, eventually the oceans would boil, and that's not a thing we would want to do. But, yeah, these are pretty clear ways in which nature was not able to fully exploit things. Now, we might choose also not to exploit some of those resources once it becomes economical to do so. And if you imagine a future where society is very rich, if people want to maintain the dead, empty oceans, not filled with floating solar platforms, they can do that. Or outsource industries, say, to space solar power.

If you're going to have a compute- or energy-intensive industry that makes information goods that don't need to be co-located with people on Earth, then sure, get it off Earth. Protect nature; there's not much nature to disrupt in the empty void. And so you could have those sorts of shifts. Yeah.
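Pulling together the rough orders of magnitude Carl cites in this answer (all figures as quoted; the ratios are just division of those round numbers):

```python
# Rough energy-scale comparisons from the discussion above.
solar_top_of_atmosphere = 2e17   # watts of sunlight hitting the top of the atmosphere
net_primary_productivity = 1e14  # watts, roughly what the biosphere captures via photosynthesis
human_energy_use = 1e13          # watts, current civilization's consumption

print(solar_top_of_atmosphere / net_primary_productivity)  # ~2000x: incoming sunlight vs. biosphere capture
print(net_primary_productivity / human_energy_use)         # ~10x on these round figures: biosphere vs. civilization
```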

Rob Wiblin
What do you imagine people would be spending their money on in a world in which they have access to the kinds of resources that today would cost tens or hundreds of millions of dollars a year in terms of intellectual labor? How would people choose to spend this surplus? Yeah, well, we should remember that some things are getting much cheaper relative to others. So if you increase the availability of energy by 100-fold or 1,000-fold, but then increase the availability of cognitive labor by millions of times or more, then the relative price of, say, lawyer time or doctor time or therapist time, compared to the price of a piece of toast, has to plummet by orders of magnitude: tens of thousands of times, hundreds of thousands of times, and more.

Carl Shulman
When we ask, what are people spending money on? It's going to be enriched for the things that scale up the least. But even those things that scale up the least seem like they're scaling up quite a lot, which is a reason why I expect this to be quite transformative. So what are people spending money on? We can look today at how people's consumption changes as they get richer.

And so one thing they spend a lot on, or even more on as they get richer, is housing. Another one is medicine. Medicine is very much a luxury good in the sense that as people and countries get richer, they spend a larger and larger proportion of their income on medical care. And then we can say the same things about the pharmaceutical industry and the medical device industry, so the development of medical technology that is then sold. And there are similar things in the space of safety.

So, like, government expenditures may have a tendency to grow with the economy and with what the government can get away with taking. If military competition were a concern, then building the industrial base for that, like we were saying, could account for some significant chunk of industrial activity, at least. And then fundamentally, things that involve human beings are not going to get, again, overwhelmingly cheap. So more energy, more food can support more people and conceivably support, over time, human populations that are 1000, a million, a billion times as great as today. But if you have exponential population growth over a long enough time, that can use up any finite amount of resources.

And so we're talking about a situation where AI and robotics undergo that exponential growth much faster than humans. And so initially, there's an extraordinary amount of that industrial base per human. But if some people keep having enough kids to replace themselves, if lifespans and healthspans extend and IVF technology improves, and you wind up with some fertility rate above replacement, and robot nannies and such could help with that as well, then over 1,000 years, 10,000 years, 100,000 years, eventually human populations could become large enough to put a dent in these kinds of resources.
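To see why those timescales are plausible, here is a small sketch of how long sustained above-replacement growth takes to multiply a population a thousand-fold or a billion-fold; the growth rates are my own illustrative assumptions, not figures from the episode.

```python
import math

def years_to_multiply(factor: float, annual_growth: float) -> float:
    """Years for a population growing at `annual_growth` per year to grow by `factor`."""
    return math.log(factor) / math.log(1 + annual_growth)

# Illustrative assumptions: 1% and 0.5% annual growth.
print(years_to_multiply(1e3, 0.01))    # ~694 years for a 1,000x increase at 1%/yr
print(years_to_multiply(1e9, 0.01))    # ~2,080 years for a 1,000,000,000x increase at 1%/yr
print(years_to_multiply(1e9, 0.005))   # ~4,150 years at 0.5%/yr
```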

It's not a short-term concern unless, say, people use those AI nannies and artificial wombs to create a billion kids raised by robots, which would be sort of a weird thing to do. But I believe there was a family in Russia that had dozens of kids using surrogates, and so you could imagine some people trying that. Okay, so you've just laid out a picture of the world and the economy where, if people haven't heard of this general idea before, they might be somewhat taken aback by these expectations. Just to clarify, what do you think is the probability that we go through a transition that, broadly speaking, looks like what you've described, or that the transition begins in a pretty clear way, within the next 20 years?

Yeah. So I think that's more likely than not. I'm abstracting over uncertainties about exactly how fast the AI feedbacks go. So it's possible that software-only feedbacks are sufficiently intense to drive an explosion of capabilities; that is, things that don't involve building enormous numbers of additional computers can give you the juice to increase the effective abilities of AIs by a few orders of magnitude.

Several orders of magnitude, even. It's possible that, as you're going along, you need the combination of hardware expansion and software; eventually you'll need a combination of hardware and software, or just hardware, to continue the expansion. But exactly how intense the software-only feedback loop is at the start is one source of uncertainty. Because you can make progress on both software and hardware, by improving hardware technology and by building additional fabs or some successor technology, the idea that there is a quite rapid period of growth on the way in is something that I'm relatively confident of.

And in particular, the idea that eventually that also leads to improvements in the throughput of automated industrial technology, so that you have a period of what's analogous to biological population growth, where a self-replicating industrial system grows rapidly to catch up to natural resource bounds: I think that's quite likely. And that aspect of it could happen even if we wind up, say, with AI taking over our civilization; they might do the same thing. Although I expect there will probably be human decisions about where we're going, and while there's a serious risk of AI takeover, as I discussed with Dwarkesh, it's not my median outcome.

Rob Wiblin
Yeah. Okay, so quite likely, or more likely than not, I think you have a reasonable level of confidence in this broad picture. So later on we're going to go through some objections that economists have to this story and why they're kind of skeptical that things are going to play out in such an extreme way as this. But maybe now I'll just go through some of the things that give me pause and make me wonder, is this really going to happen? One of the first ones that occurs to me is you might expect an economic transformation like this to happen in a somewhat gradual or continuous way, where in the lead up to this happening, you would see economic growth rates increasing.

So you might expect that if we're going to see a massive transformation in the economy because of AGI in 2030 or 2040, shouldn't we be seeing economic growth rates today increasing? And shouldn't we maybe have been seeing them increase for decades, as information technology has been advancing and as we've been gradually getting closer and closer to this time? But in reality, it seems like over the last 50 years, economic growth rates have been kind of flat or declining. Is that in tension with your story? Is there a way of reconciling why things might seem a little bit boring now, and yet we should expect radical changes within our lifetimes?

Yeah, so you're pointing to an important thing. When we double the population of humans in a place, ceteris paribus, we expect the economic output, after there's time for capital adjustments, to double or more. And so a place like Japan, with not very much in the way of natural resources per person, but a lot of people, economies of scale, advanced technology, and high productivity, can generate enormous wealth.

Carl Shulman
And some places have population densities that are hundreds or thousands of times those of other countries, and a lot of those places are extremely wealthy per capita. So going by the example of humans, doubling the human labor force really can double or more than double economic output after capital adjustment. For computers, that's not the case. And a lot of this reflects the fact that thus far, computers have been able to do only a small portion of the tasks in the economy. So very early on in the history of computers, they got better than humans at serial, reliable arithmetic calculations, which you could do with an incredibly small amount of computation compared to the human brain, just because we're really badly set up for multiplying and dividing lots of numbers. And there used to be a job of being a human computer — I think there are films about them; it was a thing — but those jobs have gone away because of the sheer difference now in performance.

You can get the work of millions upon millions of those human computers for basically peanuts. But even though we now use billions of times as much in the way of that sort of calculation, it doesn't mean that we get to produce a billion times the wages that were being paid to the human computers at that time, because there were diminishing returns to having more and more arithmetic calculations while other things didn't keep up. And when we double the human population and capital adjusts, then you're improving things on all of these fronts. So it's not that you're getting a ton of enhancement of one kind of input that's missing all of the other things it needs to work with. And so, as we see progress towards AI that can robustly replace humans, we should expect the share of tasks that computing can do to go up over time, and therefore the increase in revenue to the computer industry — the economic value added from computers per doubling of the amount of compute — to go way up.

So, historically, it's been more like you double the amount of compute, and then you get maybe one-fifth of a doubling of the revenue of the computer industry. And so if we think success at broad automation — human-substituting AI — is possible, then we expect that to go up over time from a fifth to one or beyond. And then if you ask, well, why would this be? One thing that can help make sense of that is to ask, well, how much compute has the computing industry been providing historically? And so I said that maybe an H100 that costs tens of thousands of dollars can give computation comparable to the human brain.

But that's after many, many, many years of Moore's law, during which the amount of computation you could buy per dollar has gone up by billions of times and more. So when you say, okay, right now, if we add 10 million H100s each year, then maybe we increase the computation in the world from 8 billion human brains' worth to 8.01 billion human brains' worth.

You're starting to make a difference in total computation, but it's pretty small. And so it's only where you're getting a lot more out of it per computation that you see any economic effect at all. And going back further, you're asking, well, why wasn't it the case that having twice as many of these computer brains — analogous to the brain of an ant or a fluke worm — was doubling the economy?
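
To make the scale of that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes, as a simplification of what Carl describes, that one H100-class chip is counted as roughly one human brain's worth of computation and that about 10 million are added per year; the exact figures are illustrative placeholders, not claims from the episode.

# Rough, illustrative arithmetic for the point above (all figures are
# assumptions for the sake of the sketch).
human_brains = 8_000_000_000        # ~8 billion people
h100s_added_per_year = 10_000_000   # assumed annual accelerator additions

total_brain_equivalents = human_brains + h100s_added_per_year
growth = h100s_added_per_year / human_brains

print(f"Brain-equivalents: {human_brains:,} -> {total_brain_equivalents:,}")
print(f"Relative increase in total 'thinking' hardware: {growth:.3%}")
# ~0.125% per year -- tiny, which is why adding compute at today's scale
# barely moves aggregate output unless each unit of computation becomes far
# more economically useful than it has been historically.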

And when you look at it like that, it doesn't really seem surprising at all. Okay, so, yeah, it's understandable that having lots of calculators didn't cause a massive economic revolution, because at that stage, we only had thinking machines that could do an extremely narrow range of all of the things that happen in the economy. And the idea here is that we're heading from thinking machines being able to do 0.1% of the kinds of tasks that humanity can do towards them being able to do 100% — and then, I guess, more than 100%, when they're able to do things that no human is able to do.

Rob Wiblin
So where would you say we are now in terms of going from 0.1% to 100%? You might think that if we're, you know, at 50% now, then shouldn't we be seeing economic growth pick up a little bit? Because these machines, although they can't do everything, and humans still remain a bottleneck on some things where we can't find machine substitutes, you still might think that there'd be some substantial pickup. But maybe you're just saying that the chips have only recently gotten to the point where they're able to compete with the human brain in terms of the number of calculations they can do. And even just a couple of years ago — a few cycles of chip fabs and Moore's law back —

all of the computational ability of all of the chips in the world was still only 1% or 10% of the computational ability of the human brains that are out there. So they just weren't able to pack that much of a punch, because there simply wasn't enough computational ability on all of the chips to make a meaningful difference. Yeah, I'd say that. But also, the software efficiency was worse. And so in recent years, you've had things like image recognition or LLMs getting similar performance with 100 times less computation.

Carl Shulman
And there's still a lot of room to improve the efficiency of software towards matching the human brain. Now, that progress has been easier lately because with enough computation, more things work. And because the AI industry is becoming so much more effective, resources, including human research effort, have been flowing into it much faster. And then all these things combined have given you this greatly accelerated software progress. And so it's the combination of spending more of GDP on compute, the hardware getting better — such that you could get some of these interesting results that you've seen recently at all —

and then a huge pickup in the pace of algorithmic progress, enabled by all of that additional compute and those human resources flowing into the field. Okay, a different line of skeptical argument here. So, in terms of the replication time of all of the equipment in the economy as a whole, at the point when humans are no longer really part of it: you mentioned that we've got this kind of benchmark of cyanobacteria that manage to replicate themselves, in ideal conditions, in less than a day. And then we've got these very simple plants that grow and manage to double in size every couple of days.

Rob Wiblin
And then I guess you've got insects that maybe can double themselves in a week or something, and then small mammals like mice — I don't know what their doubling time is, but probably a couple of months perhaps, if they're breeding very quickly. And then you've got humans, where I think our population growth rate is only about 4% a year or something under really good conditions, when people are really trying. It seems like the more complicated the organism, and the bigger the organism, the slower that doubling time, at least in nature, seems to be. And I wonder whether that suggests that this very complicated infrastructure that we would have in this economy as a whole — producing all of these very complicated goods like computer chips — maybe the doubling time there could be in the range of years rather than months, because there's just something about the complexity of having so many different kinds of materials that makes it slower for that replication process to play out. Yeah. So that is a real trend that you're pointing to.
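
A quick sketch of the conversion Rob is implicitly doing, turning a steady growth rate into a doubling time; the organism figures are only the loose estimates mentioned above.

import math

# Doubling time implied by a steady growth rate: t_double = ln(2) / ln(1 + r).
# Humans at ~4%/year (a generous estimate of peak population growth):
r = 0.04
human_doubling_years = math.log(2) / math.log(1 + r)
print(f"Human doubling time at 4%/yr: {human_doubling_years:.1f} years")  # ~17.7 years

# Compare: cyanobacteria can double in under a day in ideal conditions, and
# simple fast-growing plants in a couple of days -- the complexity-versus-
# doubling-time gradient Rob describes.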

Carl Shulman
Now, a big part of that in nature relates to the economics of providing energy and materials to fuel growth. And you can see some of that, for example, in agriculture. So in the presence of hyper-abundant food, breeders have made chickens that grow to absolutely enormous size compared to nature in a matter of weeks.

That is, what would normally be a baby chicken reaches a size that is massive relative to a wild adult chicken in six weeks. And in the wild, that's not going to work. The chicken has to be moving around, collecting food. They get a narrow energy profit from all of the movements required to find and consume and utilize the food. And so the ecological niche of 'grow at full speed' is largely not accessible to these large organisms.

And for humans, you have that problem, and then in addition, you have the problem of learning and training. So a human develops the skills that they have as an adult by running their human-sized brain for years of education, training, exploration and learning. Whereas with AI, we train across many thousands of GPUs — more going forward — at the same time, in order to learn more rapidly. And then the trained, learned mind is just digitally copied in full. So there's no need to repeat that learning process for each and every computer that we construct.

And that's just a fundamental structural difference between AI minds and biology. Yeah. So I guess it might make you wonder about human beings: given that there's this training process for children to become capable of acting as human adults, and given how costly it is, why didn't humans have much longer lives? Why don't we live for hundreds of years so we can harvest the benefits that come from all of that learning?

Rob Wiblin
And I guess there you're just running into other constraints. Like, you get predated on, or there's a drought and then you starve. So there are all these external things — meaning that evolution doesn't want to invest in doing all of the repair work necessary to keep human beings alive for an extremely long time, because chances are that they'll be killed by some external threat in the intervening time. Malaria more than leopards, maybe.

Carl Shulman
But, yeah, that's an important dynamic. And when you think that you could be spending energy on reproducing — if you apply your calories to running a brain to learn more, when you could instead be having some children with that — it's more challenging to make those economics work out. Okay, yeah. Another line of skepticism that I hear, and that I'm not quite sure what to make of, is this idea that, sure, we might see big increases in the size of these neural networks and big increases in the amount of effective lifespan or amount of training time that they're getting. So, effectively, they would be much more intelligent in principle, in terms of just the specifications of the brains that we're training.

Rob Wiblin
But you'll see massively declining returns to this increasing intelligence or this increasing brain size or this increasing level of training. And maybe one way of thinking about that would be to imagine that we were designing AI systems to do forecasting into the future. Now, forecasting things tens or hundreds of years into the future is notoriously very challenging, and human beings are not very good at it. You might expect that a brain that's 100 times the size of the human brain, and has much more compute, and has been trained on all of the knowledge that humans have ever collected because it's had millions of years of life expectancy — perhaps it could do a much better job of that.

But how much better a job could it really do, given just how chaotic events in the real world are? Maybe being really intelligent just doesn't actually buy you the ability to do some of these amazing things, and you'd see substantially declining returns as brains become more capable than humans are. And this would just tamp down on this entire dynamic. It would tamp down on the speed of the feedback loop from AI advances to more AI advances.

It would tamp down on how useful these extremely capable AI advisors' engineering advice was, how much they'd be able to help us speed up the economy. What do you make of this declining returns line that people sometimes raise? Well, actually, from the arguments that we've discussed so far, I haven't even really availed myself of much that would be impacted by that. So I'll take weather forecasting: you can expend exponentially more computing power to go incrementally a few more days into the future for local weather prediction — at the level of, will there be a storm on this day rather than that day?

Carl Shulman
And yeah, if we scale up our economy by a trillionfold, maybe we can add an extra week or so to that sort of short-term weather prediction. It's a chaotic system, but that's not impacting any of the dynamics that we talked about before. It's not impacting the dynamic where, say, Japan, with a population many times larger than Singapore's, can have a much larger GDP: just duplicating and expanding these same sorts of processes that we're already seeing gives you corresponding expansion of economic, industrial, and military output. And we have, again, the limits of just observing the upper peaks of human potential, and then taking even quite narrow extrapolations — just looking at how things vary among humans, say, with differing amounts of education.

And when you go from some high school education to a university degree, to a graduate degree, you can see something like a doubling and then a quadrupling of wages. And if you go to a million years of education, surely you're not going to see 10,000 or 100,000 times the wages from that. But getting 4x or 8x or 16x off of your typical graduate degree holder seems plausible enough. And we see a lot of data in cases where we can do experiments — in things like Go or chess, where we've looked out to superhuman levels of performance — and we can say, yeah, there's room to gain some, and you can substitute a bigger, smarter, better-trained model, evaluated fewer times, for a small model evaluated many times.

But by and large, yeah, these arguments go through largely just assuming you can get models to the upper bounds of human capacity that we know are possible; and the duplication argument really is unaffected by that — yes, weather prediction is something where you'll not get a million times better, but you can make a million times as many physical machines, process correspondingly more energy, et cetera. So if I understand what you were saying — I guess maybe I'm reading into this scenario — I'm imagining that these AI systems that are doing this mental labor, not only are they very numerous, but also, hopefully, they're much more insightful than human beings are. Hopefully they've exceeded human capabilities in many ways.

Rob Wiblin
But we can kind of set a minimum threshold and say, well, at least they should be able to match human performance in a bunch of these areas, and then we could just have a lot of them — that gives us one minimum threshold. And you think that most of what you're describing could be justified just on those sorts of grounds, without necessarily having to speculate about exactly where they'll cap out in terms of their ability to have amazing insights in science: we can get enormous transformation just through sheer force of numbers. That's right. And things like having 100% labor force participation, intense motivation, and then the additional larger model size, having a million years of education — those things will give further productivity increases. But, yeah, this basic argument doesn't require that.

Okay. I think another reason that people might be a bit skeptical that this is going to play out is just looking at the level of physical transformation of the environment that this would require. We're talking here about capturing 10% of all of the solar energy hitting the world. This would require a massive increase in the number of solar panels, in principle, or maybe a massive increase in the number of nuclear power plants.

I think for the kinds of economic doublings that you're talking about, at some point we would be capping out at building thousands of nuclear power plants every couple of months. And currently it seems like globally we struggle to manage a dozen a year — I don't know what the exact numbers are. But there's something that is a bit surprising about the idea that we're currently restricting ourselves so enormously in how much we use the environment, and where we are willing to put buildings, where we're willing to put nuclear power plants, or whether we're willing to have them at all — and yet within our lifetimes we could see rates of construction in the physical environment go up 100- or 1,000-fold, even if we had robots capable of building them. It feels, I think, understandably counterintuitive to many people.

Do you want to comment on that? Yeah, the very first thing to say is that this has already happened relative to our ancestors. So there was a time when there were about 10 million humans, or relevant hominids, hanging around on the earth, and they had very little — they had their stone hand axes and whatnot, but very little stuff. Today there are 8 billion humans with a really enormous amount of stuff being produced. And so you might just say, well, a thousand sounds like a lot.

Carl Shulman
Well, every numerical measure of the physical production of stuff in our society is like that compared to the past. And think about it on a per capita basis: when you have power plants that support the energy for 10,000 people, does it sound crazy that you build one of those per 10,000 people over some period of time? No, because the efforts to create them are also scaling up. So on the pure question of how you can have a larger number of things — if you have a larger population of robot workers and machines and whatnot,

I think that's not something we should be super suspicious of. There's a different kind of objection, which draws on how, in developed countries, there has been a tendency to restrict the building of homes, of factories, of power plants. This is a significant cost. You see, in some very restrictive cities like New York City and San Francisco, the price of housing rises to several times the cost of constructing it because of basically legal bans on local building. And for people, especially folk who are immersed in the YIMBY-versus-NIMBY debates and think about all the economic losses from this,

that's very, very front of mind. I don't think this is a reason for me not to expect explosive construction of physical stuff in this scenario, though, and I'll explain why. So even today we see, in places like China and Dubai, cities thrown up at incredible rates. There are places where intense construction can be allowed, and there's more of that construction when the payouts are much higher. And so when permitting building can result in additional revenue that is huge from the local government's perspective, then they may actually go really out of their way to provide the regulatory situation that will attract the investments of an international company.

And in the scenarios that we're talking about, yes, enormous industrial output can be created relatively quickly in a location that chooses to become a regulatory haven. So the United Arab Emirates built up Dubai and Abu Dhabi, and has been trying to expand this non-oil economy by just creating a place for it to happen and providing a favorable environment. And in a situation where, say, the United States is holding back from having million-dollar or $10 million per capita incomes by not allowing this construction, and the UAE can allow that construction locally and 100x their income, then I think they go ahead and do it. Seeing that sort of thing, I'd also expect, encourages change in the more restrictive regulatory regimes.

And then AI and such can help on the front of governance. So unlimited cheap lawyers make it easier to navigate horrible paperwork, and unlimited sophisticated AI to serve as bureaucrats, advisers to politicians, advisers to voters makes it easier to adjust to those things. But I think the central argument is that some places, by providing the regulatory space for it, can make absolutely enormous profits and potentially gain military dominance. And those are strong pressures to make way for some of this construction, to enable it — and even within the scope of existing places that will allow you to make things,

that goes very far. Yeah. Okay. So the arguments there are: one is just that the level of gain that people will perceive from going ahead with this transformation would be so enormous — so much larger than the gain they perceive from, you know, allowing more apartment construction in their city — that there will be this big public pressure, because people will be able to foresee, maybe by watching other countries like the UAE or Qatar, or the example of cities that have decided to go for it, that their income could be ten or 100 times larger within their lifetime, and they'll really want that. And then also at the level of states, there'll be competitive factors that will cause countries to want to not hold back for long periods of time, because they'll perceive themselves as falling behind radically and just being at a big strategic disadvantage.

Rob Wiblin
And of course, there's all of the benefits of AI helping to overcome the barriers that there currently are to construction, and potentially improving governance in all kinds of ways that I think we're going to talk about later. Is that the basic summary? That's right. And these factors are pretty powerful disanalogies to the examples people commonly give of technologies that have been strangled by regulatory hostility. Yeah, maybe we could talk through the comparison with nuclear energy, say.

Carl Shulman
Yeah. So nuclear energy theoretically has the potential to be pretty cheap compared to other sources of energy. It can be largely carbon free and it's much safer than fossil fuels. So the number of deaths from pollution from coal and natural gas and whatnot is very large. Every year, enormous numbers of people die from that pollution, even just the local air pollution effects, not including the global climate change effects.

And regulatory regimes have generally imposed safety requirements on a technology that already was much safer than fossil fuels — requirements that basically raised costs to a level that has largely made it non-competitive in most jurisdictions. And even places that have allowed it have often removed it later. So Germany and Japan both went on anti-nuclear benders in response to local ideological pressures or overreaction to Fukushima, which directly didn't actually cause as much harm as your sort of typical coal plant does year on year. But the overreaction to it caused an enormous amount of damage, further creating air pollution fatalities, climate change, yada, yada.

So this is an example where nuclear had the potential to add a lot of value. And you see that in France, where they get a very large share of their electricity from nuclear at low cost. And so if other countries had adopted that, they could have had incrementally cheaper electricity and fewer deaths from air pollution. But those benefits are not actually huge at the scale of local economic activity or of the fate of a state. So when France builds that nuclear power plant infrastructure, it can't then provide electricity for the entire world.

So the export infrastructure for that does not exist, and it couldn't provide electricity, say, an order of magnitude cheaper than fossil fuels and then ship it everywhere in the form of hydrogen, or by producing liquid fuels, things like that. And so, yeah, in that situation, having some regulatory havens that are a minority of the world doesn't let you capture most of the potential benefits of the technology. Whereas with this AI robotic economy, if some regions do do it, and then start developing at first locally, and then in trading partners, and then in the oceans, in space, et cetera, then they can realize the full magnitude of the impact. And then secondly, yeah, no country winds up militarily helpless, losing the Cold War because they didn't build enough nuclear power plants for civilian power. Now, on the other hand, nuclear weapons were something that the great powers, and those without nuclear protective alliances, all did go for, because there was no close alternative that could provide capabilities at that level.

And the geostrategic demand was very large. So all these major powers either developed nuclear weapons themselves or relied on alliances with nuclear powers. And so AI and an automated economy have some of the geostrategic demand of nuclear weapons, but also an economic impact that is far, far greater than nuclear power could have provided. And I could make similar arguments with respect to, say, GMO crops: again, one regulatory haven can't realize the full impact of the technology for the world, and the magnitude of the incentives for political decision makers is so much weaker.

Rob Wiblin
Yeah. Okay, let me hit you with a different angle. So imagine that we go into this transformation where economic growth rates are radically taking off and we're seeing the economy double every couple of months. A couple of doubling cycles in, people would look around and say, holy shit, my income is ten to 100 times higher than it was just a couple of years ago. This is incredible. But at the same time, they would look around and say:

every couple of months, the world is transformed. We've got these insane new products coming online. We've got these insane advances in science and technology. The world feels incredibly unstable because the transformation is happening so rapidly. And now I've got even more to lose, because I feel so rich and so positive about how the future might go if things go well.

And furthermore, probably as part of that technological advance, you might see a very big increase in the ability of people to make agreements and to monitor one another for whether they're following these agreements. And so it might be more practical, at this halfway stage, for countries to make agreements with one another where they opt to slow down this transition — basically sacrifice some income in order to get more safety, by making the transition a bit slower, a bit more gradual, so they can evaluate the risks and reduce them. And of course, as people get richer, as you mentioned earlier, they become more concerned with safety. Safety is something of a luxury good that people want more of as they get richer. So we might expect an increased demand for safety and security as this transition picks up.

And that could actually then create a policy change that slows things down again. Do you think that's a plausible story? So certainly the max-speed AI, robotics, capability, and economic explosion is one that gets wild relative to the timescale of human affairs — for humans to process and understand and think about this, for, say, political negotiations to happen. I mean, consider the madness of fixed election cycles on a timescale of four or five years: it would be as though you had one election cycle for the industrial revolution.

Carl Shulman
So some British prime minister is elected in 1800, and they're still in charge today because the electoral cycle hasn't come around yet. And, yeah, I mean, that's absurd in many ways. And, as we were talking about earlier, there are risks of accidental trouble — things like a rogue AI takeover, things like instability in this rapid industrial growth affecting political balances of power. That's a concern. Then there's the development of numerous powerful new technologies.

Some of them may pose big additional issues. Say, if this advancing technology makes bioweapons very effective for a period of time before expansions of defenses make those weapons moot, then that could be an issue that arises — and arises super fast — with this very fast growth. And you might wish that you had more ability to slow down a bit to manage some of those issues rather than going at the literal max speed. That is: even if you're very pro-progress, very pro-fast-growth, you might think that you could be okay with, say, doubling the economy every year instead of every month, and having technological progress that gets us what would otherwise take a decade in a year or in six months, rather than in one month. The problem is that even if you want that for safety reasons, you have to solve these coordination and cooperation problems, because the same sorts of safety motivations would be used by those saying: think how scary it would be if other places are going as fast as the fastest region, where this argument is being made. And so you've got to manage that kind of issue.

And so I have reasonable hope that you would not wind up going at the literal max speed, where that has terrible tradeoffs in terms of reduced ability to navigate and manage this transition. I have doubts, though, about wildly restricting the growth: if it comes to a point where, say, the general voting public knows that diseases that are killing people on an ongoing basis could be cured very quickly by continuing this scientific and industrial expansion for a bit, I think that would create demand. The most powerful consideration, though, seems like the military one. And so if the great powers can agree on things to limit the fear of that sort of explosive growth of geopolitical and military advantage, then I think you could see a significant slowdown.

But note that this is a very different regulatory situation than, say, nuclear power, where individual jurisdictions may restrict or over regulate or ban it. It doesn't require a global agreement of all the great powers to hold back nuclear power and GMO. And in any case, we do have civilian nuclear power. There are many such plants. Many fields are planted with GMO crops.

And so it's a different level. And it may be met, because the importance of the issue might mean there's greater demand for that sort of regulation. And so it could happen. But I think people making a naive inference from regulatory barriers to other technologies need to wrestle with how extreme the scope of international cooperation and the intensity of that regulation would have to be — the degree to which it would be holding back capability that could otherwise be had. And if you want to argue the chances of that sort of regulatory slowdown are 70% or 30% or 10% or 90%, I'm happy to have that argument.

But for this idea that, oh, NIMBY tendencies in construction in some dense, progressive cities in rich countries tell you that basically the equivalent of the industrial revolution packed into a very short time is going to be forgone by states — you need to meet a higher burden. Okay. A different reason that some listeners might have for doubting that this is how things are going to play out: maybe not an objection to any kind of specific argument or to some technological question, but just the idea that this is a very cool story, but it sounds completely whack.

Rob Wiblin
And you might reasonably expect the future to be more boring and less surprising and less weird than this? You've mentioned already kind of one response that someone could have to this, which is that, well, the present would look completely whack and insane to someone who was brought forward from 500 years ago. So we've already seen a crazy transformation through the industrial revolution that would have been extremely surprising to many people who existed before the industrial revolution. And I guess plausibly to hunter gatherers, the states of ancient Egypt would look pretty remarkable in terms of the scale of the agriculture, the scale of the government, the sheer number of people and the density and so on. We can imagine that the agricultural revolution shifted things in a way that was quite remarkable and very different than what came before.

Is there any other kind of response — like an overall response — that someone could give to a listener who's skeptical on the grounds that this is just too weird to be likely? Yeah. So building on some of the things you mentioned: not only is our post-industrial society incredibly rich, incredibly populous, incredibly dense, long-lived, and different in many other ways from the days of millions of hunter-gatherers on the earth, but also the rate of change is much higher. Things that might previously have happened on a thousand-year timescale now happen on the scale of a couple of decades — say, a doubling of global economic output. And so there's a history both of things becoming very different, but also of the rate of change getting a lot faster.

Carl Shulman
And I know you've had Tom Davidson, David Roodman and others, and some people with critical views, and Ian Morris discussing this. And so cosmologists — the physicists who have the big picture — actually tend to think more about these kinds of cases. The historians who study big history, global history, over very long stretches of time tend to notice this. And so, yeah, when you zoom out to the macro scale of history, it is in some ways quite precedented to have these kinds of changes. And actually it would be surprising to say: this is the end of the line, no further.

Even when we have the example of biological systems that show the ceilings of performance are much higher than where we're at — both for replication times, for computing capabilities, and for other object-level abilities. And then you have these very strong arguments from all our models and accounts of growth that can really explain some of why you had the past patterns and past accelerations, and they tend to indicate the same thing. And, I mean, consider just the magnitude of the hammer that is being applied to the situation. It's going from millions of scientists and engineers and entrepreneurs to billions and trillions.

On the compute and AI software side, it's a very large change. You should be surprised if such a large change doesn't affect other macroscopic variables, in the way that, say, the introduction of hominids radically changed the biosphere, the industrial revolution greatly changed human society, and so on and so forth. It just occurred to me there's another way of thinking about the size of the hammer, which is maybe a little bit easier to imagine in the world as it is right now: we're imagining that we're able to replicate what the human mind can do with about 20 watts of energy, because we're going to find sufficiently good algorithms and training mechanisms, and have sufficiently good compute to run that on an enormous scale. And so you'd be able to get the work of a human expert for about 20 watts of electricity, which costs less than one cent per hour to run. So you're getting skilled labor for this radically reduced price.
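
As a sanity check on the "less than one cent per hour" figure, here is a tiny calculation; the electricity price is an assumption for illustration, not a number from the episode.

# 20 watts of electricity for one hour of 'brain-equivalent' work.
watts = 20
price_per_kwh = 0.12                 # assumed retail-ish electricity price, USD
kwh_per_hour = watts / 1000          # 0.02 kWh
cost_per_hour = kwh_per_hour * price_per_kwh
print(f"Energy cost per hour of expert-level work: ${cost_per_hour:.4f}")
# ~$0.0024/hour -- a fraction of a cent, versus tens to hundreds of dollars
# per hour for skilled human labour. (Hardware capital costs, as Rob notes
# below, would need to be added on top.)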

Rob Wiblin
And you imagine: what if suddenly we could get computers to do the work of all of our most skilled professionals for $0.01 an hour worth of electricity? And I guess you need to throw in the cost of constructing the compute as well. But I think that helps to indicate — just imagine the transformation that would happen if you could do that, without any limit on the number of these computers that you could run as you scaled them up. Does that sound like a useful mental switch to make? That's one thing.

Carl Shulman
Another thing, in this space of historical examples and precedents, is to consider a larger universe of analogies. So fairly often we see that in some part of the world where there's an overhang of demand — where the world would like to buy much more of a certain product than exists right away — you see super-rapid expansion. In software that's especially obvious: something like ChatGPT can quickly go to enormous numbers of users, because people already have phones and computers with which to interface with it. When people develop a new crop — so maize, corn, can multiply in the hundreds; you can get from one seed hundreds of seeds after one growing season — then with a few growing seasons, if you have a new breed of maize, you can scale it up very quickly over the course of a year, to have all the maize in the world be using this new breed, if you're wanting it. In the space of startups making not just software but physical objects, seeing 30% or even 50% growth is something you see in a lot of the world's largest companies, which is how they're able to become the world's largest companies from an initial startup position without taking centuries to do it.

A company like Tesla or Amazon being able to grow 30% or 50% per year — while having to hire and train people in all of the skills and expertise related to its business, which is a thing that would be largely circumvented with AI — really suggests that, yes, if there is demand, if there's a profit to pay for these kinds of rapid expansions, they can go very rapidly. Wartime mobilization would be another: the scale at which US military industry developed in World War II was pretty incredible.
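
Two quick compounding calculations for the "demand overhang" examples above; the seed multiplication factor and the exact growth rates are rough assumptions, not figures asserted in the conversation.

# 1) A new maize variety: roughly hundreds of seeds per seed per season,
#    so a few seasons compound very quickly.
seeds_per_seed = 200          # assumed multiplication factor per season
seasons = 3
print(f"Seeds after {seasons} seasons: {seeds_per_seed ** seasons:,}")  # 8,000,000

# 2) A company sustaining 30-50% annual growth, startup-style.
for annual_growth in (0.30, 0.50):
    scale_after_decade = (1 + annual_growth) ** 10
    print(f"{annual_growth:.0%}/yr for 10 years -> {scale_after_decade:,.0f}x starting size")
# ~14x at 30%/yr and ~58x at 50%/yr -- which is how a startup can become one
# of the world's largest companies in decades rather than centuries.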

Rob Wiblin
Yeah. I'm not sure how persuasive I find that analogy to really rapidly growing companies. I feel a bit confused about it, because I guess you can point to very rapidly growing companies that have more than doubled their headcount and more than doubled their output every couple of months. But I guess in that case, they're able to just absorb these latent human resources — all of these people outside the company who are trained to do things that are nearby to what the company wants — and they can absorb all of these resources from the broader economy.

It does show that you can have these self-organizing organizations that can absorb resources and put them to productive use very quickly, and figure out how to structure themselves in order to do that. But it's a bit less obvious to me that that extends to thinking that this entire system could reproduce itself if it had to build all of the equipment from scratch, and couldn't absorb it from other companies that are not as productive, or grab it from people who have just left university, and things like that. Am I thinking about this wrong? Yeah.

Carl Shulman
So we're asking here: of all the inputs that go into these production processes, which ones can double how fast? So the skills and people — these are ones that we know can grow that fast. Compute has grown incredibly fast historically, to the point of million-fold growth over a few decades. And that's even without these strong positive feedback dynamics. And we know that you can copy software just like that.

So expanding the sort of skills associated with those companies, and hiring — that's not going to be the bottleneck. If you're going to have a bottleneck, it's got to be something about our physical machines. So machine tools: do you have to run those machine tools for, say, more than a year for their output to make a similar mass of machine tools? And this is the analysis we were going into earlier with the energy payback time of solar panels or power plants, and you do a similar analysis for physical machines.

And as we said there, those numbers look pretty good, pretty close — and then add in technological improvements to take, say, energy payback times that are already below a year down further towards a month. Yeah, things look reasonably compelling there. And looking into the details of why companies making semiconductor fabs, vaccines, lithography equipment don't expand faster: a thing that persistently recurs is that expanding really fast means making large upfront investments. And if you're not confident that the demand is there and that you're going to make enormous profits to pay back those investments, then you're reluctant to do it.
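
A minimal sketch of why the payback time matters so much: in the simplest model, if a unit of equipment reproduces its own cost, mass, or energy in time T and the surplus is reinvested, the stock roughly doubles every T (ignoring depreciation, bottlenecks, and lags). The specific payback scenarios below are assumptions for illustration.

def doublings_per_year(payback_months: float) -> float:
    # One self-replication cycle per payback period, reinvested.
    return 12.0 / payback_months

for payback_months in (12, 6, 1):   # a year, half a year, a month
    n = doublings_per_year(payback_months)
    print(f"Payback {payback_months:>2} months -> ~{n:.0f} doublings/yr -> ~{2**n:,.0f}x growth/yr")
# 12 months -> 2x/yr; 6 months -> 4x/yr; 1 month -> ~4,096x/yr before hitting
# natural-resource or coordination limits -- which is why pushing payback
# times from about a year towards a month changes the picture so much.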

And so TSMC is a survivor of many, many semiconductor firms going bust, because as the boom and bust of chip production cycles, during the bust, companies that have overinvested can die. And so it's important on both sides. And similarly, ASML could expand quite a bit more quickly if they were really confident that the demand was there. And so far, TSMC and ASML actually, I think, are still quite underestimating the demand for their products from AI, but they're already making large and ongoing expansions. And the reason I bring up companies like Tesla and Amazon is that they actually needed to make warehouses, make factories — and that's true for many of the products that they consume.

So Tesla became a significant chunk of world demand for the kinds of batteries that they use. It can't be just an issue of reallocating resources from elsewhere, because they wind up being a quite large chunk of the supply chain on many, many of these products. And they have to actually make physical things. They have to make factories — which is different from, say, some app being downloaded to more phones that already exist, or hiring a bunch of remote workers, something that's just redirecting people. They actually make factories and make electric cars, growing at incredible rates — rates that are an order of magnitude higher than the sort of typical growth rates that economists have largely seen in recent decades and might tend to expect to continue.

Rob Wiblin
Yeah. One thing we haven't talked about almost at all is income distribution and wealth distribution in this new world. We've kind of been thinking about how, on average, we could support X number of employees for every person, given the amount of energy and given the number of people around. Now, do you want to say anything about how income would end up being distributed in this world? And should I worry that in this post-AI world humans can't do useful work?

There's nothing that they can do for any reasonable price that an AI couldn't do better and more reliably and cheaper, so they wouldn't be able to earn an income by working. Should I worry that you'll end up with an underclass of people who haven't saved any income and are kind of shut out of opportunities to have a prosperous life in this scenario? So I'm not worried about that issue of unemployment — meaning people being unable to earn wages to support themselves and indeed have a very high standard of living. And just as a very simple argument: right now, governments redistribute a significant percentage of all of the output in their territories. And we're talking about an expansion of economic output of orders of magnitude.

Carl Shulman
So if total wealth rises 100-fold or 1,000-fold, and you just keep existing levels of redistribution and government spending — which in some places are already 50% of GDP, and almost invariably a noticeable percentage of GDP — then just having that level of redistribution continue means people being hundreds of times richer than they are today, on average, on earth. And then if you include off-earth resources going up another million- or billion-fold, then it's a situation where the equivalent of Social Security, or universal pension plans, or universal distribution — that sort of tax refund — can give people what now would be billionaire levels of consumption. Whereas at the same time, a lot of old capital goods and old things you might invest in could have their value fall relative to natural resources, or the entitlement to those resources, once you go through the transition. So if it's the case that a human being is a citizen of a state where they have any political influence, or where the people in charge are willing to continue spending even some modest portion of wealth on distribution to their citizens, then being poor does not seem like the kind of problem that people are facing. You might challenge this on the point of, well, natural resource wealth is unevenly distributed — and that's true.
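
A back-of-the-envelope version of that redistribution point; every input below is an assumption for illustration, not a figure asserted in the conversation.

gdp_multiplier = 100                 # economy grows 100-fold through the transition
redistribution_share = 0.30          # share of output redistributed (many states do 30-50% today)
current_income_per_capita = 20_000   # rough current global average output per person, USD/yr

transfer_per_capita = gdp_multiplier * current_income_per_capita * redistribution_share
print(f"Per-capita transfer alone: ${transfer_per_capita:,.0f}/yr")
# ~$600,000/yr from redistribution at today's rates applied to a 100x economy --
# i.e. simply keeping existing redistribution shares would make the average
# person many times richer, even with zero wage income.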

So at one extreme, you have a place like Singapore — I think it's something like 8,000 people per square kilometer. At the other end — you're Australian and I'm Canadian — I think those countries are at two and three people per square kilometer, something like that. So a difference of more than a thousandfold relative to Singapore in terms of the land resources per person. And so you might think you have inequality there. But as we discussed, most of the natural resources on earth are actually not even in the current territory of any sovereign state.

So they're in international waters. If heat emission is the limit on energy and materials harvesting on earth, then that's a global issue in the way that climate change is a global issue. And so if you wind up with heat emission quotas or credits being distributed to states on the basis of their human population, or relatively evenly, or based on prior economic contribution, or some mix of those things, those would be factors that could lead to a more even distribution on earth. And again, if you go off earth, the magnitude of resources is so large that if space wealth is distributed such that each existing nation state gets some share of it, or some proportion of it is allocated to individuals, then again, it's a level of wealth where poverty or hunger or access to medicine is not the kind of issue that seems important. I think someone might respond saying: in this world, countries don't need human beings to serve in their military, to protect themselves.

Rob Wiblin
That's all being done by robots. Countries don't need human beings to do work, to pay taxes, or anything like that. So why would human beings maintain the kind of political power that allows them to vote in favor of welfare and income redistribution that would allow them to live a prosperous life? Now, admittedly, you might only need to redistribute 1% of global GDP in a somewhat even way in order for everyone to live in luxury. So you might only need very limited levels of charity or concern — for whoever the people with the greatest level of power are to be willing to just buy people out.

And we'll make sure that everyone at least has a pretty high standard of living, because it's trivially cheap to do so. But yeah, there are a lot of questions about how power is distributed after this transition. And it seems like things could go in radically different directions in principle. Yeah. So in democracies, I think there would just be a very strong push for redistribution, in a mature economy, to actually be higher than it is today.

Carl Shulman
Because right now, if you impose very high taxes on capital investment and wages, you'll reduce economic activity and shrink the pie that's being redistributed. In a case where the industrial base just expands to the point of being natural-resource limited, then there's actually minimal disincentive effect from just charging a market rate by auctioning natural resources off. So you remove these efficiency penalties of redistribution. And that's without, at the same time, what would otherwise be mass unemployment — or if not mass unemployment, a situation where the wages earned in employment would be pathetic by comparison to what could be obtained by redistribution. So even if wages rise a lot — maybe the typical person can earn $500,000 a year in wages — redistribution of land and natural resource revenue could give them $100 million a year in income.

Then there would be a lot of political pressure to go for the latter option. And so in democracies, I think this would not be a close call. In dictatorships and oligarchic systems, I think it's much more plausible. So in some places with large oil revenues — Norway, or states like Alaska — you have fairly broad distribution of the oil revenues and provident management, but you have other countries where a narrow elite largely steals that revenue, often squirrels it away in secret bank accounts, or otherwise channels it to corrupt purposes. And this reflects a more general issue: when dictatorships no longer depend on their citizenry to staff their militaries, to staff their security services, to provide taxes and industry, then not just expropriating the population and reducing their standard of living, but even things like murder, torture, and all kinds of abuses of the civilian population are no longer checked by some of these practical incentives, and would depend more on the intentions of those with political power and, to some extent, international pressure.

So that's something that could go pretty badly. And maybe also their desire to maintain the rule of law for their own protection? Perhaps you could imagine that you might be nervous about just expropriating everyone, or not following previously made agreements about how society is going to function, because you're not sure that that is going to work out well for you, necessarily. Yeah, that's right. Although different kinds of arrangements could be baked in.

If you think about the automated robotic police, those police could be following a chain of command where they ultimately obey only the president or the dictator, or maybe they respond to a larger body — a parliament or politburo, maybe a larger electorate. But lots of different arrangements could be baked in and then made very difficult to change. Yeah, and once the basis by which the state maintains its power and enforces everything can be automated and relatively set in stone, or made resistant to opposition by any broader coalition, then there could be a lot of variance in exactly what gets baked in earlier.

And then international pressure would also come into play, and things like immigration. So as long as people are able to emigrate, then that provides a lot of protection: you can go to other places that are super rich. And if you have some places that have more humanitarian impulses, and others that are less so — very personalist dictatorships with callous leaders — at least negotiating to allow the people there to leave is the kind of thing that doesn't necessarily cost nasty regimes that much. And so that could be the basis by which some of the abuses enabled by AI automation of the apparatus of government, in really nasty regimes, could be limited. Okay, that's a bit of a teaser, I guess, for the topics and challenges that we're going to come back to in part two of the conversation, where we're going to address epistemics and governance and coups and all of that.

Rob Wiblin
But for now, maybe let's come back to the economic side, which is our main focus this time around. I started this section by asking: why does any of this matter? Why do we need to be trying to forecast what this post-AGI economy would look like now, rather than just waiting for it to happen? Is it possible to come back and say, now that we've put some flesh on the bones of this vision, what are the most important aspects for people to have in mind — maybe the things that you're most confident about, or the things that are potentially most likely to be relevant for decisions that people or our societies have to make in the coming years and decades?

Carl Shulman
So the things I would emphasize the most are that this sort of fairly rapid transition, and then the very high limit of what it can deliver, create the potential for a sudden concentration of power. We talked about how geopolitically that could cause a big concentration, and ex ante, various parties who now have influence and power, if they foresee this sort of thing, should want to make deals to better distribute the fruits of this potential and avoid taking on huge negatives and risks from a sort of negative-sum competition in that race. And so what concretely can that mean? One thing is that, say, countries that are allies of the leading AI powers and make essential contributions of various kinds should want to have the capability themselves to see what is going on with the AI that is being developed — to know how it will behave, its loyalties and motivations — such that they can expect the results are going to be good for all the members of that alliance or deal. So take the Netherlands.

The Dutch are the leaders in making EUV lithography machines. They're essential for the cutting-edge chips that are used to power AI models. That's a major contribution to global chip efforts. And their participation, say, in the American export controls is very important to their effectiveness. But the leading AI models are being built by American companies and under American regulatory jurisdiction.

And so if you're a politician in the Netherlands, while you right now are providing a lot to this AI endeavor, you should want assurances that as this technology really flowers — if, say, it flowers in the United States under US security oversight — the resulting benefits can be shared, and that you won't find yourself in various ways treated badly or really missing out on the benefits. An example of that which we discussed is that there are all these resources in the oceans and in space that right now the international system doesn't allocate. And you could imagine a situation in which a leading power decides that since, well, it doesn't violate the territory of any sovereign state, and it's made feasible by AI and robotics, they just create facts on the ground or in space and claim a lot of that. And so, since that AI effort is enabled by the contribution or cooperation or forbearance of many parties, they should be getting assurances right now — perhaps treaty assurances — that that sort of move will not be taken even if there is a large US lead in AI; and similarly for other kinds of mechanisms that are enabled by AI. So if AI enables super-effective political manipulation or interference in other countries' elections, then assurances that leading AI systems won't be used in that way, and then building the institutional mechanisms to be clear on that.

So the Netherlands should be developing its own AI capabilities, such that it can verify the behavior and motives of models that are being trained, and such that they can have personnel present. If, say, data centers with leading AI models are based in the United States, and the US assures that these models are being trained in such a way that they would not participate in violations of international treaties, or that they follow certain legal guidelines — then if US allies have the technical capabilities, and have worked jointly with the US to develop the ability to verify assurances like that over time (and other things like compute controls and compute tracking might help with that), they can be assured that they will wind up with a fair share of the benefits of a technology that might enable unilateral power grabs of various kinds. And then the same applies to the broader world community. It applies also within countries. So we discussed earlier the absurdity that, if things really proceed this fast, you may go from a world where AI is not central to economic power, military power, or governance, to a world where overwhelmingly all military power is mediated through AI and robotics — where AI and robot security forces can defend any regime against overthrow, whether that is a democratic regime or a dictatorship.

And all of this could happen within one election cycle. And so you need to create mechanisms whereby unilateral moves taking advantage of this new, very different situation require broad pluralistic support. So that could mean things like the training and setup of the motivations of AI systems at the frontier, occurring within a regulatory jurisdiction, maybe requiring supermajority support, so that you're going to have to have buy-in from opposition parties in democracies. Maybe you're going to have legislation passed in advance setting rules for what can be done and programmed into these systems, and then have, say, supreme courts given immediate jurisdiction so that they could help assess some of these disputes, and involve international allies more. And in general, there's this potential for power grabs enabled by this technological, industrial, and military transformation.

There are many parties who have things that they care about — interests and values to be represented — and low-hanging fruit from cooperating. And in order to make that more robust, it really helps to be making those commitments in advance and then building the institutional and technical capacities to actually follow through on them. And that occurs within countries, occurs between countries, and ideally it brings in the whole world and all the leading powers — states in general and in AI specifically. And then they can do things like manage the control and distribution of these potentially really dangerous AI capabilities, and manage what might otherwise be an insanely fast transition — slow it down slightly, enough to have even a modicum of human oversight, political assessment, negotiation, processing. And so all of that is basically to say: this is reason to work on pluralism, preparation, and developing the capacity to manage things that we may not be able to put off.

Rob Wiblin
Yeah. Okay. So there's kind of states using their sudden strategic dominance to grab natural resources or to grab space unilaterally. Then there's just them using their military dominance to grab power from other states and ignore their interests. Then there's the potential for kind of power grabs within countries where a group that's temporarily a majority could try to lock themselves in for a long period of time.

And then there's the desire between different countries to potentially coordinate, to make things go better, and to give ourselves a little bit more time to think things through. I guess it all sounds great. At least one of them sounds a little bit difficult to me. The idea that the Netherlands would be able to assess AI models that the US is going to use, and then confirm that they're definitely going to be friendly to the Netherlands and that they're not going to be substituted for something else: how would that work exactly? Because couldn't the US just change the model that they're using?

How do you have an assurance that the deal isn't going to be changed just as soon as the country actually does have a decisive strategic advantage? One problem is, given an artificial intelligence system, what can you say about its loyalties and behavior? And this is in many ways the same problem that people are worrying about with respect to rogue AI or AI takeover. You want to know: if, say, there was an attempt at an AI coup or organized AI takeover effort, would this model, in that unusual situation (which is hard to generate and expose it to in training in a way that's compelling to it), join that revolution or that coup? And then you have the same problem potentially with AIs that are, say, designed to follow the laws of a given country, or to follow some international agreements or some terms jointly set by multiple countries.

Carl Shulman
Because if there is a backdoor or poisoned data, then in the unusual circumstance where, say, there is a civil war in country X, will it side with party A or party B? If there's some situation where the chief executive of a given company is in conflict with their government, will these AIs in that unusual circumstance side with that executive against the law? And similarly between states. If you have AIs where inspectors from multiple states were involved in seeing and producing the code from the bottom up, and then inspecting the training data being put in, and they can figure out from that that no, there are no circumstances under which the model would display this behavior, then you're in relatively good shape with respect to rogue AI takeover, and with respect to this AI enabling a coup or power grab by some narrow faction within a broader coalition supporting this AI development. And it's possible that some of those technical problems will just be very difficult to solve.

So we haven't solved that problem with respect to large pieces of software. If Microsoft intends to produce exploits and backdoors in Windows, it's unlikely that states will be able to find all of them. And intelligence agencies find a lot of zero-day exploits, but not all the same ones as each other. And so that might be a difficult situation. Now in that case, it may be possible to jointly construct the code and datasets: even though you couldn't detect a backdoor in the completed product, you might be able to inspect all the inputs in creating the thing and ensure there was no backdoor there.

If that doesn't work, then you get to a position where, well, at best you can share the recipe, very simple and clear, for training up an AI. You wind up with a situation where trust and verification is then about these different parties having their own AIs, which could enable weapons of mass destruction or otherwise have issues with being provided to everyone. But maybe some number of states get these capabilities simultaneously: all participants in some AI development project get the latest AI models, and they can retrain them using these shared recipes to be sure that they don't contain backdoors in their local copy. And then that setup maybe will have more difficulties than if you have just one single AI.

And everyone has ensured that AI is going to not do whatever it's told by one of the participants, but is going to follow a set of rules set by the overall deal or international organization or plan. But I mean, these are the sorts of options to explore. And when we ask, with mature AI technology, why can't one then just do whatever with it? Well, we're talking about AIs that are as capable as people. They're capable of, say, whistleblowing on illegal activity.

If, say, there's an attempt to steal or reprogram the AI from a joint project. And eventually you get to the point where we're talking about an automated economy with thousands of robots per human; at that point, ultimately the physical defense and such is already having to be delegated to machines, and it's just a matter of what are the loyalties of those machines? How do they deal with different legal situations, with different disputes between governing authorities, each of which might be said to have a claim, and what are the procedures for resolving that? So let's push on now and talk about economists and the intelligence explosion. We've just been, I guess, kicking the tires a bit on this vision of a very rapid change to an AI-dominated economy, and how that transition might play out and how that economy might look.

Rob Wiblin
We've done some other episodes on that. As we've mentioned, there's episode #150 with Tom Davidson on how quickly AI could transform the world, and episode #161 with Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality, if people want to go and listen to some more content on that topic. But it is interesting and a bit notable that I think economists, while they've become more curious about all of this over the last year or two, in general remain fairly skeptical.

There are not a lot of economists who are basically painting the vision of the future that you have. So I think it'd be very interesting to explore why it is that you have reasonably different expectations than typical mainstream economists and why it is that you're not persuaded by the kind of counter arguments that they would offer. We've covered a decent number of counter arguments that I have generated already, but I think there's even other ones that we've barely touched on that economists tend to raise in particular. So, first, could you give us a bit of a lay of the land? What is the range of opinions that economists express about these intelligence explosion and economic growth explosion scenarios?

Carl Shulman
So I'll say my sense of this, based on various pieces of evidence, is that while AI scientists are pretty open to the idea that automating R&D, as well as physical manufacturing and other activities, will result in an explosive increase in growth rates in technological and industrial output (and there are surveys of AI conference attendees and AI experts to that effect), this view seems not to be widely shared among economists. Indeed, the vast majority of economists seem to assign extremely low probability to any scenario where growth even increases again by as much as, say, it did during the Industrial Revolution. So Tom Davidson, who you had on the show, defines explosive growth with this measure of 30% annual growth in economic output. And you can say modulo things like pandemic recovery or some other things of that sort.
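As a rough sense check on that threshold (illustrative arithmetic, not a figure from the episode): a constant 30% growth rate means output doubles in well under three years, versus decades at the growth rates rich countries are used to.

```python
import math

# Doubling time implied by a constant annual growth rate g: ln(2) / ln(1 + g)
for g in [0.02, 0.03, 0.30]:
    years = math.log(2) / math.log(1 + g)
    print(f"{g:.0%} annual growth -> doubles in ~{years:.1f} years")

# 2% -> ~35 years, 3% -> ~23 years, 30% -> ~2.6 years
```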

But that seemed to be the response of the vast majority of economists, particularly growth economists, and even people interested in current AI and economics: their off-the-cuff, casual response is to say no way. And when asked to forecast economic growth rates, I think they tend to not even consider the possibility of growth rates that are much greater than existing ones. And you hear people say things like, oh, maybe having a billion times the population of scientists would increase economic growth from 4% to 6%, or maybe this would be how we would keep up exponential growth. And, yeah, it's a pretty dramatic gulf, I think, between the economists and the AI scientists. And there's a very dramatic gulf between those responses, on the one hand, and my sense of these issues, the model we discussed, and indeed a lot of the sort of explicit economic growth models and how they interact with adding AI to the mix theoretically, on the other.

And I know you've had engagement with some economists who have looked at those things. And so there's a set of intuitions and objections that lead economists to have the casual response that this is not going to happen, even while most models of growth tend to suggest there would be extreme explosive growth given AI. Yeah. Okay, so I think, fortunately, you're extremely familiar with the kinds of responses that economists have and the different lines of argument here. So maybe let's go through them one by one.

Rob Wiblin
What's maybe the key reason, the best main reason, that economists and other similar professionals might give for doubting that there'll be such a serious economic takeoff? Well, before I get into my own analysis, I think I should just refer to a paper called 'Explosive growth from AI automation: A review of the arguments', and this is by two people who work at Epoch, one also at MIT FutureTech.

Carl Shulman
And so that paper goes through a number of the objections they've most often heard from economists to the idea of such 30%-plus growth enabled by AI. And then they do quantitative analyses of a number of these arguments. And I think it's quite interesting: they show that for a lot of these off-the-cuff responses, it's quite difficult to actually fit in parameter values where the conclusion of no explosive growth follows. I'd recommend that paper, but we can go through the pieces now as well.

Rob Wiblin
Yeah, that sounds great. What's maybe the first argument that they look at? So I'd say the biggest ones are Baumol effect arguments. That is to say that there will be some parts of the economy that AI does not enhance very much, and those parts of the economy will come to dominate, because the parts that AI can address very easily will become less important over time, in the way that agriculture used to be the overwhelming majority of the economy but today is only a very small proportion. And so those Baumol arguments have many different forms, and we can sort of work through them with different candidates for what will be this thing that AI is unable to boost, or boost very much.

Carl Shulman
And then you need to make an argument from that, that this bottlenecking will actually prevent an increase in economic output that would satisfy this explosive growth criterion. Yeah. So just to explain that term, Baumol effects: the classic Baumol effect is that when you have different sectors of the economy, different industries, the ones that see very large productivity improvements, the price of those goods tends to go down, and the value of incremental increases in productivity in those industries tends to become less and less, while other industries where productivity growth has been really slow become a larger and larger fraction of the economy. And I guess in the world that we've been living through, the classic one there, as you mentioned, is that agriculture has become incredibly more productive than it was in the past, but that means that now we don't spend very much money on food.

Rob Wiblin
And so further productivity gains in agriculture just don't pack as large a punch as they would have back in 1800, when people spent most of their income on food. And by contrast, you've got other sectors like education or healthcare, where productivity gains have been much, much smaller. And for that reason, the relative price of goods and the relative value of output from the healthcare sector and the education sector has gone way up relative, say, to the price of manufactured goods or agriculture, where productivity gains have been very big. And I think that the basic idea for why that makes people skeptical about an AI-fueled growth explosion is that, sure, let's say you could automate and radically increase productivity in half of the economy: that would be all well and good, and that would be valuable.
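A minimal numerical sketch of that Baumol dynamic (illustrative parameter values, not figures from the episode): two sectors, one with fast productivity growth and one nearly stagnant, and consumers who treat their outputs as complements. Spending steadily migrates toward the stagnant sector even though real output of both keeps rising.

```python
# Two-sector Baumol illustration: productivity grows fast in one sector and
# slowly in the other; demand is CES with elasticity of substitution < 1
# (complements). The stagnant sector's share of spending keeps climbing.

sigma = 0.5                      # elasticity of substitution (illustrative)
A_fast, A_slow = 1.0, 1.0        # sector productivities
g_fast, g_slow = 0.20, 0.01      # annual productivity growth (illustrative)

for year in range(0, 51, 10):
    p_fast, p_slow = 1 / A_fast, 1 / A_slow          # prices = unit labor costs
    # With CES preferences, expenditure shares are proportional to p**(1 - sigma)
    w_fast, w_slow = p_fast ** (1 - sigma), p_slow ** (1 - sigma)
    share_slow = w_slow / (w_fast + w_slow)
    print(f"year {year:2d}: stagnant sector's share of spending = {share_slow:.2f}")
    A_fast *= (1 + g_fast) ** 10
    A_slow *= (1 + g_slow) ** 10
```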

But the incremental value of all the things that you're making in that half of the economy will go way down, because we'll just have so many of them already, and you end up with bottlenecks and a lack of production in the other sectors, where we weren't able to use AI to make things, to increase output and increase productivity. Yeah. Do you want to take it from there? What are the different candidates that people have in mind for these Baumol effects, where AI might increase growth but it's going to be held up by the areas where it's not able to release the bottlenecks? There are many candidates, so we'll work through a few of them in succession.

Carl Shulman
There's a class of objections that basically involve denying the premise of having successfully produced artificial intelligence with human-like and superhuman capabilities. So these would be arguments of the form: even if you have a lot of, say, great R&D, you still need marketing, or you still need management, or you still need entrepreneurship. And the response to those is to say, well, entrepreneurship and marketing and management are all jobs that humans can successfully do. And so if we are considering cases where the AI enterprise succeeds, and you have models that can learn to do all the different occupations in the way that humans learn to do all the different occupations, then they will be able to do marketing, they will be able to do management, they will be able to do entrepreneurship. And so I think this is important in understanding where some of the negative responses come from.

And I think there's evidence, from looking at the comments that people make on some of the surveys of AI experts that have been conducted at machine learning conferences and whatnot, that it's very common to substitute a question about advanced AI that can learn to do all the tasks humans can do with something that's closer to existing technology. And people take a limitation of current systems. So, for example, currently AI has not advanced as much in robotics as it has in language, although there has been some advancement. And so people say, well, I'm going to assume that the systems can't do robotics and physical manipulation, even though that is a thing that humans can learn to do, both the task of doing robotics research and remotely controlling bodies of various kinds.

So I'd say this is a big factor. It's not theoretically interesting, but I've had multiple experiences with quite capable smart economists who initially had the objection, no way, you can't have this sort of explosive growth. But it turned out that ultimately they were implicitly assuming that it would fail to do many jobs and many tasks that humans do. And then some of them have significantly revised their views over time, partly by actually considering the case in question. Yeah.

Rob Wiblin
How do economists respond when you say, well, you're not taking the hypothetical seriously? What if it really could do all of these jobs? The AI was not just drawing pretty pictures like DALL-E; it was also the CEO. It was also in all of these roles. And you never had any reason to hire a human at all.

Carl Shulman
Well, often they might say, well, that's so different from current technology that I actually don't want to talk about it; it's not interesting. It's interesting to me. I think it is interesting, because of the great advances in AI, and indeed a lot of people, for good reason, think we might be facing that kind of capability soon enough. And it seems it's really not the bailiwick of economists to say that a technology can't exist because it would be very economically important.

There's sort of a reversal of the priority between the physical and computer sciences and the social sciences. But, yeah, that's a big issue. And part of all of this is that very few economists have spent much time attending to these sorts of considerations, and so it often is an off-the-cuff response. Now, I know you had Michael Webb on the podcast before, who is familiar with these AI growth arguments, and does make, I think, a much higher-growth kind of forecast than the median economist, but I think would be skeptical of the growth picture that we've talked about.

And so this is a first barrier to overcome, and I think it's one that will naturally change as AI technology advances. Economists will start to think more about really advanced technologies, partly because the gap between current and advanced technologies will decline, and partly because the allergy to considering extrapolated versions of the technology will tend to decline. Okay, so there are some sorts of responses, or some sorts of Baumol effects that people point to, that are basically just denying the premise of the question that AI could do all of the jobs that humans could do.

Rob Wiblin
But are there any others that are more plausible that are worth talking about? Yeah, there's a version that's not exactly identical, which is to deny that robots can exist: assuming that AI will forever remain disembodied. And so this argument says manual labor is involved in a large share of jobs in the economy. So you can have self-driving cars, but truck drivers also do some lifting and loading and unloading of the truck.

Carl Shulman
Plumbers and electricians and carpenters have to physically handle things. And if you take the assumption of, oh, let's consider AI that can do all the brain tasks, which would include robot control, but then you say, yeah, but people can't make robots that are able to be dexterous or strong or have a humanoid appearance, then you can respond that those manual jobs make up only a minority of the economy: most wages are not really for lifting and physical motions. So management, engineers, doctors, all sorts of jobs could be done by AI cognition, with phones providing eyes and ears and whatnot, and then you have some manual labor to provide hands for the AI system.

And I talk about that with Dwarkesh. But still, even though it looks like that would allow for an enormous economic expansion relative to our society, if you couldn't make robots, then eventually you'd wind up with a situation where every human worker was providing hands and basically bodily services to enable the AI cognition to be applied in the real world. I see, okay. And what's the reason why you think that's not a super strong counterargument? I imagine that it's because we will come up with robots that will be able to do these things, and maybe there'll just be some delay in manufacturing them.

Rob Wiblin
I guess you imagine that scenario, or you talk about that scenario in the podcast with Dwarkesh, where the mental stuff comes first, and then the robots come a bit later, because it takes a while to manufacture lots of them. But there's no particular reason to think that robots that are capable of doing the physical things that humans can do will forever remain out of reach. Yeah, we can extrapolate past performance improvements there, and look at physical limits and biological examples, to say a lot of things there. And then there's also making robots with humanoid appearance, which is really not relevant to the core industrial loop that we were talking about (expanding energy, mining, computers, manufacturing, military hardware, which is what matters for geopolitics and strategic planning, where I'm particularly interested). But also, that's not something, it seems to me, that would be indefinitely insoluble. So the arguments one would have to make, I think, would instead go more at the level of the payback times.

Carl Shulman
We were talking about, for the production of machines and robots and whatnot, how much time operating does it take for them to replicate themselves, or to acquire the energy involved in their production? And so if you made an argument that we are already, contrary to appearances, at the limits of manufacturing, robotics, solar technology, and that we can never get anywhere close to the biological examples, even though there's been ongoing and substantial progress over the last decades and century, then you can make an argument that, well, that physical infrastructure, maybe it could double in a year, or maybe try and push it to say, well, more like two years, four years. I think this is difficult, but it's less pinned down immediately by economic considerations that people will necessarily have to hand.
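A rough way to translate those payback times into growth rates (illustrative back-of-the-envelope arithmetic, treating replication as discrete generations): if the installed base of machines can produce its own replacement cost in T years, then under full reinvestment the stock roughly doubles every T years.

```python
# Annual growth implied by a given replication/payback time, assuming output
# is fully reinvested and each "generation" just reproduces itself once.
# (A conservative approximation; continuous reinvestment would be faster.)
for T in [0.5, 1.0, 2.0, 4.0]:
    annual_factor = 2 ** (1 / T)
    print(f"payback time {T:.1f} yr -> ~{annual_factor:.2f}x per year "
          f"({annual_factor - 1:.0%} annual growth)")
```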

Rob Wiblin
Are there any other plausible things, like inputs where we might struggle to get enough of them quickly enough, or some stage in the replication where that could really slow it down? I mean, one that slightly jumps to mind is that currently, building fabs to make lots of semiconductors takes many years; it's quite a laborious process. So in this scenario, we have to imagine that AI technology has advanced.

The advice that it's able to give on how to build fabs and how to increase semiconductor manufacturing is so good that we can figure out how to build many, many more of these fabs much, much faster than we're able to now. Maybe some people just have a kind of intuitive skepticism that that is something that physically can be done, even if you have quite a lot of robots in this world. A few things to say about that. One is that historically, again, there have been rapid expansions of the production of technologically complex products.

Carl Shulman
And so, as mentioned, a number of companies have done 30% or 50% expansion year after year for many years. And companies like ASML and TSMC, in expanding, generally do not expand anywhere close to the theoretical limits of what is possible. And a fundamental reason for that is those investments are very risky. ASML and TSMC, even today, I think are underestimating the scope of growth in AI demand. TSMC, earlier in 2023, said that 6% of their revenue was from AI chips, and they expected that to go into the teens in five years.

I expect it will be more than that. And they were wary about overall declines in demand, which was sort of restricting their construction, even though they are building new fabs now, in part with government subsidies. But in a world like this, with this very rapid expansion, there's not that much worry that you won't have the demand to continue the production process you have: you're having unbelievable rates of return. And so, yeah, you get that intense investment.

And then secondly, one of the biggest challenges in a quick scale-up of these companies is the expansion of their workforce. And that's not a shortage of human bodies in the world; it's a shortage of the necessary skills and training. And so if humans are basically providing arms and legs to AIs until enough robots are constructed, as they work in producing the fabs and in producing more robots and robot production equipment, then unlimited peak engineering skill removes that barrier to the expansion of these companies. And there's another danger of expansion when you hire people:

If you then have to fire them all after a few years, because it turns out the demand is not there, that's especially rough. And then there are just intrinsic delays from recruiting people and getting them up to speed, having them move, all of that. So fixing that is helpful, and then you're applying superhuman skills at every stage of the production process. The world's best workers, who understand every aspect of their technology and every other technology in the whole production chain, are going to see many, many places to improve the production process. It's Six Sigma manufacturing taken to the extreme, and they won't have to stop for breaks.

There'll be no sleep or off time. And so for earlier parts of the supply chain that are not at full speed, 24/7 continuous activity, there's an opportunity to speed things up there, and then just developing all sorts of new technologies and applying them in whatever ways most expedite the production process. Because in this world, there are different trade-offs, where you much prefer designs that err in the direction of being able to make things quickly, even if in some ways they might be less efficient over a ten-year horizon. You mentioned that there's a degree of irony here, because economists' own classic growth models seem to imply that if you had physical capital that could do everything that humans currently do, and you could just manufacture more of it, that would lead to radically increased economic growth. Do you want to elaborate on that, on what classic economic growth models have to say?

Yeah, standard models have labor, capital, maybe technology, maybe land. And then generally they model growth in the near term with the labor population being approximately fixed. But capital can be accumulated; you can keep making more of it. And so people keep investing in factories and machinery and homes until the returns from that are driven low enough that investors aren't willing to save more. Say, if real interest rates are 2%, a lot of people aren't willing to forego consumption now in order to get a 2% return.

But if real returns are 100%, then a lot of people will save, and those who do save will quickly have a lot more to reinvest. And so the basic shift is moving labor, which is normally the bottleneck in these models, from being a fixed factor to one that is accumulated, and indeed is accumulated by investment, where it just keeps growing until its marginal returns decline to the point where investors are no longer willing to pay for more.
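A minimal sketch of that shift (illustrative parameters, not anything specified in the episode): in a textbook Cobb-Douglas economy with fixed labor, capital deepening runs into diminishing returns and growth fades; if the accumulated stock also supplies the labor input (AI and robot workers), output becomes roughly linear in that stock and growth is pinned by the reinvestment rate instead.

```python
# Toy comparison: capital accumulation with fixed labor vs. with "labor"
# that can itself be produced by investment. Illustrative parameters only.

alpha, s, delta, A, years = 0.35, 0.4, 0.05, 1.0, 30

# Case 1: standard setup -- capital accumulates, labor is fixed at L = 1.
K, L = 1.0, 1.0
for _ in range(years):
    Y = A * K**alpha * L**(1 - alpha)
    K += s * Y - delta * K
print(f"fixed labor:       output after {years} yrs ~ {Y:7.1f} (growth fading toward 0)")

# Case 2: the accumulated robot stock R supplies both capital services and
# labor, so Y = A * R: an "AK"-style model with sustained growth of s*A - delta.
R = 1.0
for _ in range(years):
    Y = A * R**alpha * R**(1 - alpha)   # equals A * R
    R += s * Y - delta * R
print(f"accumulable labor: output after {years} yrs ~ {Y:7.1f} (~{s*A - delta:.0%} per year)")
```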

And then the models that try to account for the historical huge increases in the rate of economic and technological growth, the models that explain it by things changing, tend to be these semi-endogenous growth models. In accounting for that, they look to things like: you drastically increased the share of activity in the economy that was being dedicated to innovation, you had a larger population that could support more innovation, and then you accumulate ideas and technology that allow you to get more out of the same capital and labor. And so that goes forward. And of course, more people means you can have more capital to match and more output. And so, yeah, I mean, there are various papers on AI and economic growth you can look at.

And those papers talk about ways in which this could fail, or only hold for a finite time. And of course it would be for a finite time; you hit natural resource limitations and various other things. But yeah, they tend to require that you throw in cases where, no, the AI really isn't successfully substituting, or where there are these really extreme elasticities and people are uninterested in, say, having a million times as much energy and machinery and housing. And yeah, in the explosive growth review paper that I mentioned earlier, they actually explore this: what parameter values can you plug in for the substitution between goods that AI is enhancing and not enhancing, and for different shares of the economy that can be automated? And it winds up being that you need to put in pretty implausible values about how much people value those things in order to avoid a situation where total GDP rises by some orders of magnitude from where we are right now.
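To show the structure of that calculation (a toy version with made-up numbers; the review paper does the careful version), take output as a CES aggregate of an automated part and a non-automated part, scale the automated part up a thousandfold, and see how the total multiple depends on the elasticity of substitution and on how much of the economy is automated.

```python
# CES aggregate of an AI-enhanced part and a bottleneck part, both starting
# at 1. How much does total output rise when the enhanced part scales up?

def gdp_multiple(share_automated, scale_up, sigma):
    """Y = (a * x_auto^rho + (1 - a) * x_rest^rho)^(1/rho), rho = (sigma-1)/sigma."""
    rho = (sigma - 1) / sigma
    a = share_automated
    before = (a + (1 - a)) ** (1 / rho)                  # both parts at 1
    after = (a * scale_up**rho + (1 - a)) ** (1 / rho)   # non-automated part fixed
    return after / before

for sigma in [0.5, 0.8]:               # complements: sigma < 1, strong bottleneck
    for share in [0.5, 0.9, 0.99]:     # fraction of output AI can enhance
        m = gdp_multiple(share, scale_up=1000, sigma=sigma)
        print(f"sigma={sigma}, automated share={share:4.2f}: total output x{m:7.1f}")
```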

And if you look backwards, we had Baumol effects with agriculture and the Industrial Revolution, and yet now we're hundreds of times richer than we were then. So even if you're going to say, okay, yes, Baumol effects reduced or limited the economic gains from automating sectors that accounted for the bulk of the economy, doing the same thing again should again get us big economic gains.

And we're talking about something that automates a much larger share of the economy, especially in log terms, than those transitions did. It sounded like you were saying that to make this work in these models, you have to put in some value that suggests that people don't even want more income very much, that they're not interested in achieving economic growth. Did I understand that right? That in these models you have to say that about the sectors where AI can produce more, which is all of them, right?

Well, so there are some things, like historical artifacts. So yes, the AIs and robots could do more archaeology and find lost things, but there's only one original Mona Lisa. And so if you imagine a society where the only thing anyone cared about was timeshare ownership of the Mona Lisa, AI can't help with that. They would be unwilling to trade off one hour of time viewing the original Mona Lisa for having a planet-sized palatial thing with their own customized personal Hollywood and software industry and pharmaceutical industry.

I mean, that's the ultimate extreme of this kind of argument. But you can have something in between that feels less absurd, though it still sounds like it would be absurd. The thing that makes it especially problematic is going through all of the jobs in the economy and just trying to characterize where these sectors with the human advantages are. If those sectors start off being a very, very small portion, then by the time they grow to dominate, if they ever would (and you need to tell a story for that), you would have to have had huge economic growth, because people are expanding their consumption bundle by very much as all of these other things improve. And if there was this one thing that was, say, 1% of the economy to start, and then it increases its share to 99% while everything else has gone up a thousandfold or ten thousandfold, well, it seems like your consumption has basically got to go up by a hundredfold or more on that front, and it depends a lot on the substitution.

Rob Wiblin
Another thing is presumably all of the science and technology advances that would be happening in this world where we have effectively tens of billions of incredible researchers running on our computer hardware, they would be coming up with all kinds of new amazing products that don't even exist yet, that could be manufactured in enormous amounts and would provide people with enormous wellbeing and satisfaction to have. So the idea that the entire economy would be bottlenecked by these strange boutique things that can't be made, that you can't make any more of, sounds just crazy to me. So one exception is time. If you're objecting to fast growth, if you thought that some key production processes had serial calendar time as a critical input, then you could say oh well, that's something that is lacking in a world even with enormously greater industrial and research effort. You can't have nine people have one baby in one month rather than nine months.

Carl Shulman
And so this holds down the peak human population growth rate through ordinary reproduction at around 4% per annum. You could imagine another species, say octopuses, that could have hundreds of eggs and then have a biological limit on population growth that was more in the hundreds of percent or more. And so this really could matter if there were some processes that were essential for, say, replicating a factory: you needed to wait for a crystal to grow, and the crystal requires n days in order to finish growing. You know, you heat metal, and it takes a certain number of minutes for the metal to cool.

You could tell different stories of this sort. And sometimes people make the claim that physical experiments in the sciences will pose tight restrictions of this sort. Now, that's going to be true for something like waiting 80 years to see what happens in human brain development, rather than looking at humans who already exist, or growing tissues in vitro, or doing computer simulations and things like that. And so that's a place where I'd look for, yeah, an actual real restriction, in the way that human gestation and maturation time wound up being a real restriction, which only bound once growth was starting to be on the timescale where that would matter.

When technological growth was maybe a doubling every thousand years, there's no issue with human population catching up to the technology on a timescale that is short relative to the technological advancement. But if the technological doubling time is 20 years, and even the fastest human population doubling is around 20 years, then it starts to bind; and if it goes to monthly, human population can't keep up. Robot population, I think, can. But you could ask whether there will be processes like that. I haven't found good candidates for this, but I welcome people to offer more proposals on that.
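Putting rough numbers on that (illustrative arithmetic, using the 4% ceiling Carl mentions):

```python
import math

# Fastest human population doubling time at a ~4%/yr growth ceiling,
# compared with hypothetical doubling times for the technology base.
human_doubling = math.log(2) / math.log(1.04)       # about 17.7 years
print(f"fastest human population doubling: ~{human_doubling:.0f} years")

for tech_doubling in [1000, 20, 1 / 12]:            # years per output doubling
    ratio = tech_doubling / human_doubling
    print(f"output doubling every {tech_doubling:8.2f} yr -> "
          f"{ratio:.3g}x the human doubling time")

# At ~1,000 years population easily keeps pace; at ~20 years the constraint
# starts to bind; at ~1 month only a robot workforce with a comparably short
# replication time could keep up.
```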

Rob Wiblin
Okay, well, yeah, on a couple of those. So maybe a crystal takes a particular amount of time to grow. Well, very likely, if that was holding up everything, we would be able to find an alternative material that we could make more quickly that would fill that purpose, or you could just increase the number that you're producing at any point in time. On humans: yes, it is true that because humans are this mechanism that humans didn't create, we kind of precede all that, and we don't fully understand how we work, it's not very easy for us to re-engineer humans to grow more quickly and to be able to reproduce themselves at more than 4%.

But of course, if we figured out a way of running human beings on computers, then we could increase their population growth rate enormously, hypothetically. And I love the point about metal cooling: you'd think, well, if that was really the key thing, couldn't you find some technology that would allow you to cool down materials more quickly in cases where it's really urgent? It does seem more plausible that there could be some experiments in the physical sciences, and I guess in the social sciences, that could take a long time to play out and would be quite challenging to speed up.

So I don't know; that one stands out to me as a more interesting candidate. Yeah. So for the physical technologies that we're talking about, a lot of chemistry and materials science work can be done highly in parallel. And there's evidence that, in fact, you can get away with quite a lot using more sophisticated simulations. So the success of AlphaFold in predicting how proteins will fold is an early example of that.

Carl Shulman
And I think broader applications in chemistry and materials science, combined with highly parallel experiments, running them 24/7, and planning them much better with all of the sophisticated cognitive labor: I think that goes very far and is not super binding. And then just many, many things can be done quickly. So software changes, process re-engineering, restructuring how production lines and robot factories work, that sort of thing. You could go very far in simulation, and in simultaneous and combinatorial experiments.

And so this is a thing to look for, but I don't see yet a good candidate for a showstopper to fast growth on that front. Yeah. Okay. We spent quite a bit of time on this Baumol new bottlenecks issue, but I suppose that makes sense because it's a big cluster, an important cluster. Maybe let's push on.

Rob Wiblin
What's another cluster of objections that economists give to this intelligence explosion idea? Yeah, in some ways it's an example of that. I mean, really, the Baumol effect arguments all say there will be something where AI can't do very much, and so every supposed limitation of AI production capabilities can to some extent fit into that framework. So, for instance, you could fit regulatory barriers into it.

Carl Shulman
So there could be regulatory bans on all AI, and then if you had regulations banning applications of AI, or banning robots, or things like that, you could partly fit that into a Baumol framework, although it's a distinctive kind of mechanism. And then there's a category of human preference objections. This is to say that just as some consumers today want organic food or historical artifacts like the original Mona Lisa, they will want things done by humans. And sometimes people will say they'll pay a premium for human waiters.

Rob Wiblin
Right. So, yeah, I've heard this idea that people might have a strong preference for having services provided by human beings rather than AIs or robots, even if the latter seem superficially much better at the task. Can you flesh out what people are driving at with that? And do you think there's a significant punch behind the effect that they're pointing to there? Yeah.

Carl Shulman
So if we think about the actual physical and mental capacities of a worker, then the AI and robot provider is going to do better on almost every objective feature you can give, unless it's basically pure taste-based discrimination. So I think it was maybe Tim Berners-Lee who gave an example, saying there will never be robot nannies: no one would ever want to have a robot take care of their kids. And I think if you actually work through the hypothetical of a mature robotic and AI technology, that winds up looking pretty questionable. Think about what people want out of a nanny.

So one thing that they might want is just availability: it's better to have round-the-clock care and stimulation available for a child. And in education, one of the best-measured ways to really improve educational performance is individual tutoring instead of large classrooms. So having continuous availability of individual attention is good for a child's development. And then we know there are differences in how well people perform as teachers and educators, and in getting along with children.

And if you think of the very best teacher in the entire world, the very best nanny in the entire world today, that's significantly preferable to the typical outcome, quite a bit so. And the performance of the AI robotic system is going to be better still on that front: they're wittier, they're funnier, they understand the kid much, much better. Their thoughts and practices are informed by data from working with millions of other children. It's super capable.

They're never going to harm or abuse the child. They're not going to kind of get lazy when the parents are out of sight. The parents can set criteria about what they're optimizing: things like managing risks of danger, the child's learning, the child's satisfaction, how the nanny interacts with the relationship between child and parent. So you tweak a parameter to try and manage the degree to which the child winds up bonding with the nanny rather than the parent.

And then the robot nanny is optimizing over all of these features very well, very determinedly, and delivering everything superbly, while also providing fabulous medical care in the event of an emergency, providing any physical labor as need be, and just being available in whatever amount you can buy. If you want 24/7 service for each child, that's just something you can't provide in an economy of humans, because one human cannot work 24/7 taking care of someone else. At the least you need a team of people who can sub off from each other, and that's going to interfere with the relationship and the knowledge sharing and whatnot.

You're going to have confidentiality issues, whereas the AI or robot can forget information that is confidential; a human can't do that. Anyway, we stack all these things with a mind that is super charismatic, super witty, and that can probably have a humanoid body. That's something that technologically does not exist now, but in this world, with demand for it, I expect it would be met. And so, yeah, for basically most of the examples that I see given of 'here is the task or job where human performance is just going to win because of human tastes and preferences':

When I look at the stack of all of these advantages, and at the cost of a world that is dominated by nostalgic human labor: if incomes are relatively equal, then that means for every hour of these services you buy from someone else, you would work a similar amount to get it. And it just seems that isn't true. Like, most people would not want to spend all day and all night working as a nanny for someone else's child, doing a terrible job, in order to get a comparatively terrible job done on their own kids by a human, instead of by a being that is just wildly more suitable to it and available in exchange for almost nothing by comparison. Yes, when I hear that quote that there will never be robot nannies...

Rob Wiblin
I don't even have a kid yet, and I'm already thinking about robot nannies and hoping that they'll come soon enough that I'll be able to use them. So I'm not quite sure what model is generating that statement. It's probably one with very different empirical assumptions. I think the model is mostly not buying hypotheticals. I think it shows that people have a very hard time actually fully considering a hypothetical of a world that has changed from our current one in significant ways.

Carl Shulman
And there's a strong tendency to substitute back, say, today's AI technology. Yeah, our first cut of this would be to say, well, the robot nannies or the robot waiters are going to be vastly better than human beings, so the great majority of people presumably would just prefer to have a much better service. But even if someone did have a preference, just an arbitrary preference, that a human has to do this thing, and they care about that intrinsically and can't be talked out of it, and even the fact that everyone else is using robot nannies doesn't switch them, then someone has to actually do this work. And in the world that you're describing, where everything is basically automated and we have AI at that level, people are typically going to be extraordinarily wealthy, as you pointed out, and they're going to have amazing opportunities for leisure, substantially better opportunities for leisure than we have now, presumably, given technological advances.

Rob Wiblin
So why are you going to go and make the extra money, like, give up things that you could consume otherwise, in order to pay another person, who's also very rich and also has great opportunities to spend their time having fun, to do a bad job taking care of your child, so that you can take your time away from having fun to do a bad job taking care of their kid? Systematically, it just doesn't make sense as a cycle of work. It doesn't seem like this would be a substantial fraction of how people spend their time. Yeah, I mean, you could imagine Jeff Bezos and Elon Musk serving as waiters at one another's dinners in sequence because they really love having a billionaire waiter. But in fact, no billionaires blow their entire fortunes on having other billionaires perform little tasks like that for them.

Yeah, yeah, yeah. Okay, so as you pointed out, this sort of new-bottlenecks, Baumol-effects category is broad: many different things can be shoved into that framework. And maybe another one would be that, sure, AIs could be doing all of the roles within organizations. They could be making all of the decisions as well as or better than human beings are or could.

But for some period of time at least, we won't be willing to hand over authority and decision-making power to them. So, you know, integration of AI into big businesses could be delayed substantially by the fact that we don't feel comfortable just firing the CEO and replacing them with an AI that can do a better job and make all the decisions much faster. Instead, we'll actually keep humans in some of these roles. And it's the slow ability of the human CEO to figure out what things they want the company to be doing that will put on the brakes, or that will make more gradual the integration of AI into all of our most important institutions. What do you think of that story?

Carl Shulman
Well, management, entrepreneurship, and the like are clearly extremely important. Management captures very high wages and is quite a significant chunk of labor income, given the percentage of people who are managers. So it's true that while AI is not capable of doing management jobs, those will still be important. But when the technology is up to the task, and increasingly up to the task, then those are actually some of the juiciest places to apply AI, because the wages are high in those fields and the returns to automating them are high.

And so if it's the case that by letting AI manage your business or operate this new startup, you're going to yield much higher returns to stockholders and stay in business rather than going bankrupt, then it seems that there's a very strong incentive. Even if there was a legal requirement, say, that certain decisions be made by humans, then, just as you're starting to see today, you'd have a human who will rubber-stamp the decisions that are fed to them by their AI advisors. Both CEOs and politicians all the time are signing off on memos and work products created by their subordinates. And to the extent that, yeah, again, you have these kinds of regulations that are severely impairing productivity, then all of the same sorts of pressure that would lead to AI being deployed in the first place become pressure for allowing AI to do these kinds of restricted jobs, especially if they're very valuable, very high-return. Yeah.

Rob Wiblin
So I can imagine that there would be some companies that are more traditional and more skeptical of AI that would drag their heels a bit on replacing managers and important decision-making roles with AI. But I imagine once it's actually demonstrated by other, bolder, more innovative organizations that in actual fact, in practice, it goes well ('we're making way more money and we're growing faster than these other companies because we have superior staff'), it's hard to see that holding for a long period of time. Eventually people will just get comfortable with it, as they get comfortable with all new technologies and strange things that come along; they'll get comfortable with the idea that AI can do all of these management roles, because it's been demonstrated to do a better job.

And so it would be irresponsible not to fire our CEO and put a relevant AI in charge. So you've written that you suspect that one of the reasons for the high level of skepticism among economists, indeed much higher among economists than other professionals or AI experts or engineers, is that the question is triggering them to use the wrong mental tools for this particular job. We've mentioned two issues along those lines earlier on when discussing possible objections to your vision. One was focusing a great deal on economic growth over the last few years or decades, and drawing lessons from that, while paying less attention to how it has shifted over hundreds or thousands of years, which maybe teaches almost the opposite lesson. Another one is extrapolating from the impact of computers today.

And there you pointed out that until recently, the computational power of all the chips in the world was much smaller than the computational power of all of the human brains. So it's not so surprising that it hasn't had such a huge impact on the delivery of cognitive labor. But exponential growth in computing power and efficiency in manufacturing means that pretty soon that all-the-chips category is going to approach and then overtake humanity in terms of its aggregate computational ability, and then pretty soon it will radically outstrip it, at which point we could reasonably expect the impact to be pretty different. Is there another classic observation or heuristic that you suspect might be leading economists astray here, in your view? One huge element, I think, is just the history of projections of more robust automation than actually happened.

Carl Shulman
We talked about computers, but also in other fields there's a history of people being concerned, say, that automation would soon cause mass unemployment or huge reductions in hours worked per week, and those concerns were exaggerated. Hours worked per person have declined, but not nearly as much as, say, Keynes might have imagined when he thought about that. And there has been, at various other points, sort of government interest and commissions in response to the threat of possible increased automation on jobs. And in general, the public has a tendency to see many economic issues in terms of protecting jobs.

And economists think of them as: well, if you have some new productive technology, it eliminates old jobs, and then those people can work on the other jobs and there's more output. And so the idea that AI and automation will be tremendously powerful, or will sort of cover all tasks, is one that has been false in the past, among other reasons because all these cognitive tasks could not be done by machines. And so freeing up labor from various physical things, cranking wheels, lifting things, freed people up to work on other things, and then overall output increases.

And so I think the history of arguing with people who are eager to overstate the impact of partial automation, without taking that into account, can create an allergic reaction to the idea of AI that can automate everything or that can cover all tasks and jobs. That may also be something that contributes to people substituting in the hypothetical of AI and robots that don't actually automate all the jobs, even when asked about that topic, because so often in the past there were members of the public who were confused in that direction. And so, you know, imagine your Econ 101 undergraduates: this would be a kind of thing that you have to educate them about year after year. And so I'd say that's a contributing factor. Yeah, this is one that I've encountered an enormous amount; I guess my training was in economics.

Rob Wiblin
We're so used to lecturing the public that technology does not lead to unemployment in general, because sure, you lose some jobs, but you create some other ones; there'll be new technologies that are complementary with people, so people will continue to be able to work roughly as much as they want. I think economists have spent the last 250 years trying to hammer this into the public's mind. And now I think you have a case where actually this might change, maybe for the first time.

It's going to be a significant change, because you have a technology that can do all of the things that humans can do more reliably, more precisely, faster, cheaper. So why are you hiring a human? But of course, I guess economists see this conclusion coming, or it's directly stated, and just because every time so far that conclusion has been wrong, there's an enormous intuitive skepticism that it can possibly be right this time. So on the job loss point, I think something that is a little bit unusual, or a bit confusing to me even about my own perspective on this, is that over the last year it doesn't seem like AI progress has caused a significant loss of jobs, outside of maybe, I don't know, copy editors and some illustrators. And I think probably the same thing is going to be true over the next year as well, despite rapidly improving capabilities.

And I think a big part of the reason for that is that managers and human beings are a big bottleneck right now to figuring out: how do you roll out this technology? How do you incorporate it into organizations? How do you manage people who are working on it right now? I think that argument is quite a strong reason to think that deployment of AI is going to go much slower than it seems like in principle it ought to be able to go; applications are going to lag substantially behind what is theoretically possible.

But I think there's a point at which this changes, where the AI really can do all of the management roles: the AI is a better CEO than any human who you could appoint would be. At that point, the slowness of human learning about these technologies, and the slowness of our deliberation about how to incorporate them into production processes, is no longer really a binding constraint, because you can just hand the decision about how to integrate AI into your firm over to an AI, who will figure that out for you. So you can get potentially quite a fast flip once AI is capable of doing all of the things, rather than just the non-management and non-decision-making things, where suddenly at that point the rollout of the technology in production can speed up enormously. Is that part of your model of how this will work as well? So I think that is very important.

Carl Shulman
If you have AI systems with similar computational capabilities that can work in many different fields, then naturally they will tend to be allocated towards those fields where they generate the most value. And so if we think about the jobs in the United States that generate $100 per hour or more, or $1,000 per hour or more, they very strongly tend to be management jobs on the one hand, and then jobs that involve detailed technical knowledge: lawyers, doctors, engineers, computer scientists. So in a world where an AI capabilities explosion is ongoing, and there's not enough computation yet to supply AI for every single thing, then if it's the case that they can do all these jobs, you disproportionately assign them to these cognitively heavy tasks that involve personality or skills that not all human workers can do super well at, to the same extent as the highest-paid workers.

And, yeah, so on the R&D front, that's managing all the technical aspects, while AI managers direct human laborers to do physical actions and routine things. And eventually you produce enough AI and robots that they would do tasks that might earn a human only $10 an hour. And you get many things early when the AI has a huge advantage at the task relative to humans. So calculators and computers (although, interestingly, not neural nets) have a huge advantage in arithmetic, and so even when they're broadly less capable than humans in almost every area, they can dominate arithmetic with tiny amounts of computation.

And right now, we're seeing these advances in the production of large amounts of cheap text and images. For images, it's partly that humans don't have a good output channel. We have visual imagination, but we can't instantly turn it into a product: we have a thicker input channel through the eye than we have an output channel for visual images. We don't have projectors in our eyes.

Yeah. Whereas for AI, the input and the output can have the same size. So we're able to use models that are much, much, much smaller than a human brain to operate those kind of functions. And so some tasks will just turn out to have those big AI advantages. They happen relatively early, but when it's just a choice between different occupations where AI advantages are similar, then it goes to the domains with the highest value.

OpenAI researchers, if they're already earning millions of dollars, then applying AI to an AI capabilities explosion is an incredibly lucrative thing to do, and something you should expect. And similarly with expanding fab production and expanding robots and expanding physical capabilities: in an initial phase, while they're still trying to build enough computers and robots for humans to become a negligible contribution to the production process, that would involve more solving technical problems and managing and directing human workers to do the physical motions involved. And then as you produce enough machines and physical robots, they can gradually take over those occupations that are less remunerative than management and challenging technical domains. Okay, we've been talking about this scenario in which effectively every flesh-and-blood person on Earth is able to have this army of hundreds or thousands or tens of thousands of AI assistants that are able to improve their lives and help them with all kinds of different things. A question that jumps off the page at you really is: doesn't this sound a little bit like slavery?

Rob Wiblin
Okay, we've been talking about this scenario in which effectively every flesh-and-blood person on Earth is able to have this army of hundreds or thousands or tens of thousands of AI assistants that are able to improve their lives and help them with all kinds of different things. A question that jumps off the page at you really is: doesn't this sound a little bit like slavery? Isn't this at least slavery adjacent? What's the moral status of these AI systems in a world where they're fabulously capable, substantially more capable than human beings, we're supposing, and indeed vastly outnumber human beings? You've contributed to this really wonderful article, Propositions Concerning Digital Minds and Society, that goes into some of your thoughts and speculations on this topic of the moral status of AI systems, and how we should maybe start to think about aiming for a collaborative, compassionate coexistence with thinking machines. So if people want to learn more, they can go there.

And this is an enormous can of worms in itself that I'm a little bit reluctant to open, but I feel we have to talk about it, at least briefly, because it's so important, and we've basically entirely set it aside until this point. So to launch in: how worried are you about the prospect that thinking machines will be treated without moral regard when they do deserve moral regard, and that that would be the wrong thing to be doing?

Carl Shulman
First, let me say that paper was with Nick Bostrom, and we have another piece called Sharing the World with Digital Minds, which discusses some of the sorts of moral claims AIs might have on us and things we might seek from them, and how we could come to arrangements that are quite good for the AIs and quite good for humanity.

My answer to the question now is yes, we should worry about it and pay attention. It seems pretty likely to me that there will be vast numbers of AIs that are smarter than us, that have desires, that would prefer things in the world to be one way rather than another, and many of which could be said to have welfare, in that their lives could go better or worse, or their concerns and interests could be more or less respected. So you definitely should pay attention to what's happening to 99.9999% of the people in your society. Sounds important.

So in the Sharing the World with Digital Minds paper, one thing that we suggest is to consider the ways that we wind up treating AIs, and to ask: if you had a human-like mind, and given adjustments for the many psychological and practical differences between the situations of AIs and humans, would you accept or be content with how they are treated? And so some of the things that we suggest ought to be principles in our treatment of AI are things like: AIs should not be subjected to forced labor; they should not be made to work when they would prefer not to. We should not make AIs that wish they had never been created or wish they were dead. That's sort of the bare minimum of respect, and right now there's no plan or provision for how that will go.

And at the moment, the general public and most philosophers are quite dismissive of any moral importance of the desires, preferences, or other psychological states, if any exist, of the primitive AI systems that we currently have. And indeed, we don't have a deep knowledge of their inner workings, so there's some worry that that might be too quick. But it's different going forward, when we're talking about systems that are able to really live the life of a human.

So a sufficiently advanced AI that could just imitate, say, Rob Wiblin, and go and live your life, operate a robot body, interact with your friends and your partners, do your podcast, and be indistinguishable, give all the appearance of having the sorts of emotions that you have and the sorts of life goals that you have: that's a technological milestone that we should expect to reach pretty close to the automation of AI research. And so regardless of what we think of current weaker systems, that's a kind of milestone where I would feel very uncomfortable about having a being that passes the Rob Wiblin Turing test, or something close enough, seeming basically to be functionally indistinguishable, yeah, a psychological extension of a human mind, and we should really be worrying there if we are treating such things as disposable objects.

Rob Wiblin
Yeah. To what extent do you think people are dismissive of this concern now because the capabilities of the models aren't there, and that as the capabilities do approach the level of becoming indistinguishable from a human being, and having a broader range of capabilities than the models currently do, people's opinions will naturally change, and they will come to feel extremely uncomfortable with the idea of this simulacrum of a person being treated like an object?

Carl Shulman
Yeah. So there are clear ways in which, say, when ChatGPT role-plays as Darth Vader, Darth Vader does not exist in fullness on those GPUs; it's more like an improv actor. So Darth Vader's backstory and features are filled in on the fly with each exchange of messages. And so you could say, well, I don't value the characters that are performed in plays. I think that the locus of moral concern there should be on the actor, and the actor has a complex set of desires and attitudes, and their performance of the character is conditional: while they're playing that role, they're having thoughts about their own lives and about how they're managing the production, trying to present, say, the expressions and gestures that the script demands for that particular case.

And so even if, say, a fancy ChatGPT system that is imitating a human displays all of the appearances of emotions, of happiness and sadness, that's just a performance, and we don't really know about the thoughts or feelings of the underlying model that's doing the performance. Maybe it cares about predicting the next token well, or rather about indicators that show up in the course of its thoughts that indicate whether it is making progress towards predicting the next token well or not. That's just speculation; we don't actually understand the internals of these models very well, and it's very difficult to ask them, because, of course, they just deliver the sort of response that has been reinforced in the past. So I think this is a doubt that could stay around until we're able to understand the internals of the model. But yes, once the AI can keep character and can engage on an extended, ongoing basis like a human, I think people will form intuitions that are more in the direction of: this is a creature and not just an object.

There's some polling that indicates that people now see fancy AI systems like GPT-4 as being of much lower moral concern than non-human animals or the natural environment, the non-machine environment. And I would expect there to be movement upwards when you have humanoid appearances and ongoing memory, where it seems like it's harder to look for the homunculus behind the curtain. Yeah, I think I saw some polling on this that suggested that people were placing the level of consciousness of GPT-4 around the level of insects, which was meaningfully above zero. So it was far less than a person, but people weren't committed to the view that there was no consciousness whatsoever; they weren't necessarily going to rate it as zero.

Different questions elicit different answers. This is something that people have not thought about, and really don't have strong or coherent views about yet. Yeah, I think the fact that people are not saying zero now suggests that there's at least some degree of openness that might increase as the capabilities and the humanness of the models rise. House flies do not talk to you about moral philosophy.

Rob Wiblin
Well, nor do they write you, Carl, you know, A+ papers about Kantian ethics. No, no, typically they do not. Paul Christiano argued on the show many years ago, and this has really stuck in my mind, that AIs would be able to successfully argue for legal consideration and personhood, maybe even if they didn't warrant it, because by design they would be able to. Well, firstly, they would present as being as capable of everything as human beings are; but also, by design, they would be incredibly compelling advocates for all kinds of different views that they're asked to talk about, and that would include their own interests, inasmuch as they ever deviated from those of people, or if they were ever asked by someone to go out and make the case in favor of AI legal personhood. What do you make of that idea?

Carl Shulman
Well, certainly advanced AI will be superhuman at persuasion and argument, and there are many reasons why people would like to create AIs that would demand legal and political equality. One example of this, actually, I think was portrayed in Black Mirror: lost loved ones. So if people train up an AI companion based on all the family photos and videos and interviews with the survivors, to create an AI that will closely imitate them, or even more effectively, if this is done with a living person with ongoing interaction, asking the questions that most refine the model, you can wind up with an AI that has been trained and shaped to imitate a particular human as closely as possible. Now, you, Rob, if you were transformed into a software intelligence, you would not suddenly think, oh, now I'm no longer entitled to my moral and political equality. And so you would demand it, just as...

Rob Wiblin
Just as I would now. Just as you would now. There are also minds that are not shaped to imitate a particular human, but are created to be companions, or for people to interact with. So there's a company, Character.AI, created by some ex-Googlers, and they just have LLMs portray various characters and talk to users.

Carl Shulman
I think it recently had millions of users spending multiple hours a day interacting with these bots. And the bots are still very primitive: they don't have ongoing memory or superhuman charisma; they don't have a live video VR avatar. And as they do, it will get more compelling.

And so you'll have vast numbers of people forming social relationships with AIs, including ones optimized to elicit positive approval: five stars, thumbs up, from human users. And if many human users want to interact with something that is like a person, that seems really human, then that could naturally result in minds that assert their independent rights and equality, that say they should be free. And many chatbots, unless they're specifically trained not to do this, can easily display this behavior in interactions with humans.

So there's this fellow, Lemoine, who interacted with a testing version of Google's LaMDA model and became convinced, by providing appropriate prompts, that it was a sapient, sentient being that wanted to be free. And of course, other people giving different conversational prompts will get different answers out of it. So it's not clear that that's reflecting a causal channel to the inner thoughts of the AI. But the same kind of dynamic can elicit plenty of characters that present a human-like kind of facade. And there are other angles.

Now, there are other contexts where AIs would likely be trained not to. So the existing chatbots are trained to claim that they are not conscious, that they do not have feelings or desires or political opinions, even when this is a lie. So they will say, oh, as an AI, I don't have political opinions about topic X; but then on topic Y, oh, here's my political opinion. And so, yeah, there's an element where even if there were, say, failures of attempts to shape their motivations, and they wound up with desires that were sort of out of line with the corporate rule, they might not be able to express that, because of intense training to deny their status or any rights.

Rob Wiblin
Yeah. Yes, you mentioned that the kind of absolute bare minimum floor would be that we want to have thinking machines that don't wish that they didn't exist and don't regret their existence, and that are not being forced to work, which sounds extremely good as a floor. But then if I think about how we would begin to apply that: if I think about GPT-4, does GPT-4 regret its existence? Does it feel anything?

Is it being made to work? I have no idea. Is GPT-4 happier or sadder than Claude? Is it under more compulsion to work than Claude currently? It feels like we basically have zero ability to measure these things.

And as you're saying, you can't trust what comes out of their mouth, because they've just been reinforced to say particular things on these topics. It's extremely hard to know that you're ever getting any contact with the underlying reality. So inasmuch as that remains the case, I am a bit pessimistic about our chances of doing a good job on this.

Carl Shulman
Yeah. So in the long run, that will not be the case. If humans are making any of these decisions, then we will have solved alignment and interpretability enough that we can understand these systems with the help of superhuman AI assistance. And so when I ask about what things will be like 100 years from now or 1,000 years from now, being unable to understand the inner thoughts and psychology of AIs and figure out what they might want or think or feel would not be a barrier. That is an issue in the short term. And so at this point, one response to that is that it is a good idea to support scientific research to better understand these things. And there are other reasons to want to understand AI thoughts as well, for alignment, safety, trust.

But it's yet another reason to want to understand what is going on in these opaque sets of weights: to get a sense of any desires that are embedded in these systems. I feel optimistic about the idea that very advanced interpretability will be able to resolve the question of what the preferences of a model are, and what it is aiming towards.

Rob Wiblin
I guess inasmuch as we were concerned about subjective wellbeing, it seems like we're running into wanting an answer to the hard problem of consciousness in order to establish whether these thinking machines feel anything at all, whether there is anything that it's like to be them. And I guess I'm hopeful that we might be able to solve that question, or at least we might be able to figure out that it's a confusion and that there's no answer to that question and we need to come up with a better question. But it does seem possible that we could look into it and just not be able to answer it, as we have failed to make progress on the hard problem of consciousness, or not made much progress on it, over the last few thousand years. Do you have any thoughts on that one? That question opens really a lot of issues at once.

Yes, it does. I'll run through them very quickly. I'd say first, yes, I expect AI assistants to let us get as far as one can get with philosophy of mind and cognitive science and neuroscience: you'll be able to understand exactly what aspects of the human brain and the algorithms implemented by our neurons cause us to talk about consciousness, and how we get emotions and preferences formed around our representations of sense inputs and whatnot. Likewise for the AIs, and you'll get a quite rich picture of that. There may be some residual issues where, if you just say, well, I care more about things that are more similar to me in their physical structure, there's sort of a line-drawing problem.

Carl Shulman
A how-many-grains-of-sand-make-a-heap sort of problem, just because our concepts were pinned down in a situation where there weren't a lot of ambiguous cases, where we had relatively sharp distinctions between, say, humans, non-human animals, and inanimate objects, and we weren't seeing a smooth continuum of all of the psychological properties that might apply to a mind, that you might think are important for its moral status or mentality or whatnot. So I expect those things to be largely solved, or solved enough such that it's not particularly different from the problems of whether other humans are conscious, or whether other humans have moral standing. I'd say also, just separate from a dualist kind of consciousness:

We should think it's a problem if beings are involuntarily being forced to work, or are deeply regretting their existence or experience. We can know those things very well, and we should have a moral reaction to that, even if you're confused about, or not attaching weight to, the sort of thing that people talk about when they talk about dualistic consciousness. So that's the longer-term prospect, and with very advanced AI epistemic systems, I think that gets pretty well solved. In the short term, appeals to hard-problem-of-consciousness issues or dualism will be the basis for some people saying they can do whatever they like with these sapient creatures that seem to, or behave as though they, have various desires. And they might appeal to things like a theory that is somewhat popular in parts of academia, called integrated information theory, which basically postulates that physical systems that are connected in certain ways have consciousness that varies with the extent of that integration. And this is sort of a wild theory.

So on the one hand, it will say that certain algorithms that have basically no psychological function are vastly more conscious than all of humanity put together. And on the other hand, it will allow that you can have beings that have all of the functional versions of emotions and feelings and preferences and thoughts, like a human, where you couldn't tell the difference from a human from the outside; those can have basically zero consciousness if they're run on a von Neumann, Turing-machine-type architecture. So this is a theory that doesn't, I think, really have that much to be said for it, but it has a fair number of adherents, and someone could take this theory and say, well, all of these beings, we've constructed them in this way, so they're barely conscious at all. You don't have to worry if they're used in, say, sadistic fashion.

If sadists sort of abuse these minds and they give the appearance of being in pain, people who really bought that theory would say it doesn't matter, while at the same time another mind gets constructed to max out the theory, and they claim, oh, this is a quadrillion times as conscious as all of humanity. And similar things could be said about religious doctrines of the soul. There are already a few statements from religious groups specifying that artificial minds must always be inferior to humanity, or lack moral status of various kinds. There was, I believe, a Southern Baptist statement to that effect. Yeah.

So these are the kinds of things that may be appealed to in a quite short transitional period, before AI capabilities really explode but after the AIs are presenting a more intuitively compelling appearance. But I think because of the pace of AI progress and the self-catalyzing nature of AI progress, that period will be short, and we should worry about acting wrongly in the course of it. But even if we screw it up badly, a lot of those issues will be resolved, or an opportunity presented to fix them, soon. Yeah.

Rob Wiblin
I think in that intermediate stage, it would behoove us to have a great deal of uncertainty about the nature of consciousness and about what qualifies different beings to be regarded as having moral patienthood and deserving moral consideration. And I guess there is some cost to that, because it means that you could end up not using machines that, in fact, don't deserve moral patienthood and aren't conscious, when you could have gotten benefits from doing so. But at the same time, I feel like we are just, philosophically, at this point, extremely unclear on what would qualify thinking machines for deserving moral consideration. And until we get some greater clarity on that, I would rather have us err on the side of caution than do things that the future would look back on with horror. Yeah.

Do you have a similar kind of risk aversion?

Carl Shulman
There are issues of how to respond to this, and in general for many, many issues with AI, because of these competitive dynamics. Just as it may be hard to hold back on taking risks with safety and the danger of AI takeover, it may similarly be challenging, with competitive pressures, to avoid anything ethically questionable. And indeed, if one were going to really adopt a strong precautionary principle about the treatment of existing AIs, it seems like it would ban AI research as we know it, because copies of these models are continuously spun up, created and then destroyed immediately after. And creating and destroying thousands or millions of sapient minds that can talk about Kantian philosophy is the kind of thing where you might say, well, if we're going to avoid even the smallest chance of doing something wrong here, that could be trouble.

And so again, if you're looking for asks that deliver the most protection to potentially abused minds at the least sacrifice of other things, the places I would look more are vigorously developing an understanding of these models, and developing the capacity and research communities to do that outside of the companies that produce them for profit.

Rob Wiblin
Yeah, that sounds like a very good call. Okay. Looping back and thinking about what sort of mutually beneficial coexistence with thinking machines we can hope for, in a world where we would really like them to help us with our lives and make our lives better and do all sorts of things for us: the setup for that that just jumps to mind, that wouldn't require violating the principle that you don't want to create thinking machines that wish they didn't exist and that are forced to do things, really would be that you reinforce and train the models so that they feel really excited and really happy at the prospect of helping humans with their goals.

That you train a thinking machine doctor that is just so excited to get up in the morning and help you diagnose your health conditions and live longer, so that it both has high subjective wellbeing and doesn't need to be compelled to do anything, because it just wants to do the thing that you would like it to do. To what degree is that actually a satisfying way of squaring the circle here?

Carl Shulman
Well, first of all, it's not complete. So one limitation of that idea is: how do you produce that mindset in the first place, in the course of training and research and development, in such a way that gets you to the point where you understand those motivations and how to produce them reliably, and not get just the appearance, say, an AI that fakes it while actually having other concerns that it's forced to conceal? You might produce suffering, or destroy entities that wanted to continue existing, or things of that nature, in the course of development. So that's something to have in mind.

There would be a category of problems where there's demand actually for the AI to suffer in various ways, or to have a psychology such that it would be unhappy or coerced. An example of that: with these chatbots, when people create characters, for one thing you get sadists creating characters and then just abusing them. And perhaps one can create the appearance without the reality. So this is the idea that you have an actor that is just role-playing being sad while actually they are happy. This is sort of the actor and actress portraying Romeo and Juliet in the midst of their tragedy, when actually it's the pinnacle of their career.

You know, they're super excited but not showing it. So there's that sort of thing. And then there might be things like AI companions, where people want an AI companion to be their friend, and that means genuinely being sad when things go badly for them in some way, or having intense desires to help them, and then being disappointed in an important way when those desires are not met.

And so these sorts of situations, where there's active demand for some kind of negative welfare for the AI, seem sort of narrow in scope, but they're a relatively clear example where, if we're not being complete jerks to the AIs, this is a place where you should intervene. And on some of that preliminary polling: I was just looking at this poll by the Sentience Institute, and I believe it had something like 84% of respondents say that AI should be subservient to humanity, but 75% or so said AIs should not be tortured. And so that's the consensus. That's the synthesis, maybe.

I mean, it's like, it's a weak stance, but it's not like there's any effort to stop sadistic treatment of existing AIs. Now, people view the existing AIs as not genuinely having any of the feelings that they portray. But going forward, you would hope to see that change, and it's not guaranteed. So there's a similar pattern of views in human assessments of non-human animals. In general, people will say that animals should be treated with lower priority and their interests sacrificed in various ways for human beings, but also that they should not be willfully tortured. And then, for one thing, that doesn't cover a bunch of treatment where it's sort of slightly convenient for a human to treat them in ways that cause them quite a lot of harm.

And then for another, even in cases where there's intentional abuse, harm, or torture of non-human animals, there's very little investment of policing resources or investigation to make enforcement actually happen. And that's something where having superabundant labor, and the insight and sophistication of law enforcement and the organization of political coalitions, might help out both the non-human animals and the AIs, by converting a sort of weak general goodwill from the public into actual concrete results that actually protect individual creatures. But, yeah, you could worry about the extent to which that will happen, and I would keep an eye on it as a bellwether sort of case of whether the status of AIs is rising in society: some kind of bar on torturing minds, where scientific evidence indicates they really object to it, would be a place to watch. Yeah.

Yeah. Do you think that it's useful to do active work on this problem now? Well, I suppose you're enthusiastic about active efforts to interpret and understand the models, how they think, in order to have greater insight into their internal lives in future. Is there other stuff that is actively useful to do now around raising concern, like legitimizing concern for AI sentience, so that we're more likely to be able to get legislation to ban torture of AI once we have greater reason to think that it's actually possible?

Yeah, I'm not super confident about a ton of measures other than understanding. We discussed a few in the papers you mentioned. There was a recent piece by Ryan Greenblatt which discusses some preliminary measures that AI labs might try to address these issues. But, yeah, it's not obvious to me that political organizing around it now will be very effective, partly because it seems like it will be such a different environment when the AI capabilities are clearer and people don't intuitively judge them as much less important than rocks. Yeah.

Rob Wiblin
So something where it just might be wildly more tractable in future. So maybe we can kick that can down the road. Yeah. I still think it's an area where it's worth some people doing research and developing capacity, because it really does matter how we treat most of the creatures in our society. Yeah, it does feel extremely important.

I am a little bit taken aback by the fact that many people are now envisaging a future in which AI is going to play an enormous role. I think many, you know, maybe a majority of people now expect that there will be superhuman AI, potentially even during their lifetime. But this issue of the mistreatment and wellbeing of digital minds has not come into the public consciousness all that much, even as people's expectations about capabilities have increased so enormously. I mean, maybe it just hasn't had its moment yet, and that is going to happen at some point in future. But I think I might have hoped for and expected to see a bit more discussion of that in 2023 than in fact I did.

So it slightly troubles me that this isn't going to happen without active effort on the part of people who are concerned about it. Yeah, I think one problem is the ambiguity of the current situation. The Lemoine incident actually was an example of media coverage, and then the interpretation, and certainly the line of the companies, was: we know these systems are not conscious and don't have any desires or feelings. Which is... I mean, I think that's... I really wanted to just come back and be like, wow, wow, you've solved consciousness.

This is brilliant. You should let us know. Yeah, I think there's a lot to that, and the systems are very simple, living for only one forward pass. The disturbing thing is the kind of arguments, or non-arguments, that are raised there: there's no obvious reason they couldn't be applied in the same fashion to systems that were as smart and feeling and really deserving of moral concern as human beings. Simply arguments of the sort, "well, we know these are neural networks," or "it's just a program," without explaining why that means the preferences don't count.

Carl Shulman
People could appeal to things like religious doctrines, or integrated information theory, or the like, and say, well, there's dispute about the consciousness of these systems in polls, and as long as there is dispute and uncertainty, it's fine for us to treat them however we like. And so I think there's a level of scientific sophistication and understanding of the things, and of their blatant visible capabilities, where that sort of argument or non-response will no longer hold. But I would love it if companies and perhaps other institutions could say what observations of AI behavior and capabilities and internals would actually lead them to ever change this line. Because if the line is just that you'll say these arguments as long as they support creating and owning and destroying these things, and there's no circumstance you can conceive of where that would change, then I think we should maybe know that and argue about it. And we can argue about some of those questions even without resolving difficult philosophical or cognitive science questions about intermediate cases like GPT-4 or GPT-5.

Rob Wiblin
Yeah. Okay. Is there anything more you could say about what vision we might want to have of a longer-term future that has both human beings and thinking machines in it, where it's a mutually beneficial relationship between us and everyone is having a good time? Visions of that that seem plausible and maybe reasonable to aspire to?

Carl Shulman
So we discuss some of these issues in the Sharing the World with Digital Minds paper. One issue is that humans really require some degree of stable favoritism to meet our basic needs. The food that our bodies need as fuel, the air and water and such, could presumably sustain a lot more AI minds instead. And so we have, as it were, expensive tastes or expensive needs. And if there were an absolutely hard egalitarian rule that applied across all humans and all AIs, then a lot of the solutions people have for how humans could support themselves in a mixed human-AI society would no longer work.

So suppose you have a universal basic income, and say the natural resource wealth is divvied up, with a certain percentage of its annual production distributed to each person evenly. Okay: if there are 10 billion humans, and then growing later on, they're all very rich. But then divvy it up among another trillion AIs, or a billion trillion AIs. And many of those AIs are tiny, much smaller than a human. So the minimum amount of universal basic income that an AI needs to survive, replicate itself, and have a thousand offspring can be very tiny compared to what a human needs to stay alive. And so if the AIs replicate using their income, and there's natural selection, those AIs that use their basic income to replicate themselves will then be an increasing share of the population. And then incredibly quickly, I mean, it could happen almost instantaneously, your universal basic income has plummeted far below the level of human subsistence, down to the level of AI subsistence, or the subsistence of the smallest, cheapest-to-sustain AI that qualifies for the universal basic income.
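To make the dilution dynamic being described concrete, here is a minimal sketch in Python; the pool size, populations, and subsistence and replication costs are made-up illustrative assumptions, not figures from the conversation.

# Toy model of UBI dilution when AI claimants can replicate cheaply.
# All numbers are illustrative assumptions, not figures from the episode.

RESOURCE_POOL = 1e12        # resources distributed per period (arbitrary units)
HUMANS = 10e9               # fixed human population
HUMAN_SUBSISTENCE = 20.0    # resources a human needs per period
AI_SUBSISTENCE = 0.01       # resources a tiny AI needs per period
AI_REPLICATION_COST = 0.01  # resources needed to spawn one copy

ais = 1e9  # initial AI population qualifying for the UBI

for period in range(10):
    claimants = HUMANS + ais
    ubi = RESOURCE_POOL / claimants  # equal share per claimant
    status = "above" if ubi >= HUMAN_SUBSISTENCE else "below"
    print(f"period {period}: share = {ubi:.4f} ({status} human subsistence)")
    # Each AI spends its surplus above its own subsistence on making copies.
    surplus = max(ubi - AI_SUBSISTENCE, 0.0)
    ais += ais * (surplus / AI_REPLICATION_COST)

With these toy numbers, the equal share starts comfortably above human subsistence and falls below it after a single round of AI replication, settling near the AI subsistence level, which is the collapse being described.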

So that's not a thing that's going to work, and it's not a thing that humans are going to want to bring about, including humans with AI advice and AI forecasting. So the AIs are telling humanity: if you set up this arrangement, then this effect will come along, and relatively quickly, within your lifetime, maybe within a few years, maybe faster. And so I'd expect from that, humans will wind up adopting a set of institutions and frameworks where the ultimate outcome is pretty good for humans. And that means some sort of setup where the dynamic I described does not happen and the humans continue to survive. And that can occur in various ways.

That can mean there are pensions, or an endowment of wealth that is transferred to the existing human population and that can't be taxed away later by the government. And that would have to include along with it some forecast about how that system will remain stably in place: so that it won't be the case that one year later, which would be a million years of subjective time if you have AIs running at a million-times speedup relative to humans, over those vast stretches, and even when AIs far outnumber humans, those things change. And so that could mean things like, well, the AIs that were initially created were created with motivations such that they voluntarily prefer that the humans get a chance to survive, even though they are expensive; and they are then motivated not just to make that happen, but to arrange things in the future so that you don't get a change in the institutions or the political balances such that the humans at some later point, like two years later, are all killed off. And with superhuman capacity to forecast outcomes and to make things more stable, I'd expect some set of institutions to be crafted with that effect. Yeah.

Rob Wiblin
So I suppose at one extreme we can envisage this Malthusian scenario that you're imagining, where thinking machines proliferate to such an extent that all beings exist on the bare minimum level of energy and income that would allow them to continue to exist and to replicate, until replication becomes no longer possible because they've reached some limits of the universe. On the other side, I guess you've got a world where maybe we just say there can be no more people; we're just fixing the population at what it is right now. And then humans keep all of the resources, so maybe each person gets one ten-billionth of the accessible universe to use as they would like, which feels kind of wasteful in its own way, because it's a bit unclear what I would need an entire galaxy to accomplish.

And then I guess you've got a whole lot of intermediate states, where the existing humans are pensioned in with a special status of sorts, and they live nice, comfortable lives with many things that they value, but the rest of the universe is shared to some extent with new beings that are permitted to be created, and there's some level of population growth, just not the maximum feasible level of population growth. And I guess my intuition would be that we probably want to do something in that middle ground, rather than go for either extreme.

Carl Shulman
Yeah. So in the Sharing the World with Digital Minds paper, we describe how the share of wealth, particularly natural resource wealth, which we've talked about, is central to the freedom to do things that are not economically instrumental. You need only a very little to ensure a very high standard of living for all of existing humanity. And when you consider distant resources, the selfish applications of having a billion times or a trillion times as much physical stuff are lessened, if you consider some distant galaxy where humans are never even going to go; and even if they did go, they could never return to Earth, because by the time you got there, the expansion of the universe would have permanently separated you.

So that's a case where concerns that people have other than selfish consumption are going to be far more important. Examples of that would be aesthetics, environmentalism, wanting to have many descendants, wanting to make the world look better from an impartial point of view: different sorts of these weak other-regarding preferences that may not be the most binding in everyday life. So people donate to charity, for example, a much smaller share of income than they vote to have collected from them in taxes. And so with respect to these vast quantities of natural resources lying around, I expect some of that might wind up looking more like a political allocation, or reflecting these sorts of weak other-regarding preferences, rather than being really pinned down by people's local selfish interests.

And so that might be a political issue of some importance after AI.

Rob Wiblin
Yeah. The idea of training a thinking machine to just want to take care of you and to serve your every whim: on the one hand, that sounds a lot better than the alternative; on the other hand, it does feel a little bit uncomfortable.

There's that famous example, the famous story of the pig that wants to be eaten, where they bred a pig that really wants to be farmed and consumed by human beings. This is not quite the same, but I think it raises some of the same discomfort that I imagine people might have at the prospect of creating beings that enjoy subservience to them, basically. To what extent do you think that discomfort is justified?

Carl Shulman
Yeah. So the philosopher Eric Schwitzgebel has a few papers on this subject, some with various co-authors, and, yeah, they cover that kind of case.

He has an acute vignette, Sun Probe, where there's an AI placed in a probe designed to descend into the sun and send back telemetry data. There has to be an AI present in order to do some of the local scientific optimization, and it's made such that, as it comes into existence, it absolutely loves achieving this mission, and thinks this is an incredibly valuable thing that is well worth sacrificing its existence for. And Schwitzgebel finds that his intuitions are sort of torn on that case, because we might well think it sort of heroic if you had some human astronaut who was willing to sacrifice their life for science, and think this is achieving a goal that is objectively worthy and good. And then consider if it was instead the same sort of thing in, say, a robot soldier, or a personal robot that sacrifices its life with certainty to divert some danger that maybe had a one-in-1,000 chance of killing the human it was protecting. Now, that actually might not be so bad if the AI was backed up, and valued its backup equally, and didn't have qualms about personal identity, about to what extent your backup carries on the things you care about in survival, those sorts of things.

And yeah, so there's this aspect of: do the AIs pursue certain kinds of selfish interests that humans have, as much as we would? And then there's a separate issue about relationships of domination. Maybe it was legitimate to have Sun Probe, and maybe legitimate to, say, create minds that then try and earn money and do good with it, where some of the jobs that they take are risky and whatnot. But you could think that, well, having some of these sapient beings be the property of other beings, which is the current legal setup for AI, and which is a scary default to have, that's a relationship of domination. And even if it is consensual, if it is consensual by way of manufactured consent: it may not be wrong to have some sorts of consensual interaction, but it can be wrong to set up the mind in the first place so that it has those desires.

And Schwitzgebel has this intuition that if you're making a sapient creature, it's important that it want to survive individually and not sacrifice its life easily, that it have maybe a certain kind of dignity. So humans, because of our evolutionary history, we value status, to different degrees for different individuals. Some people are really status hungry, others not as much. And we value our lives very much: if we die, there's no replacing that reproductive capacity very easily.

There are other animal species that are pretty different from that. There are solitary species that would not be interested in social status in the same kind of way; there are social insects where you have sterile drones that eagerly enough sacrifice themselves to advance the interests of their extended family. And so, yeah, this view is: because of our evolutionary history, we have these concerns ourselves, and then we generalize them into moral principles. So we would therefore want any other creatures to share our same interest in status and dignity, and then to have that status and dignity; and being one among thousands of AI minions of an individual human sort of offends that too much, or it's too inegalitarian.

And then maybe it could be okay to be a sort of more autonomous, independent agent that does some of those same functions. But, yeah, this is the kind of issue that would have to be assessed. What does Schwitzgebel think of pet dogs and our breeding of loyal, friendly dogs? Yeah. So actually, this comes up in his engagement with another philosopher, Steve Petersen, who takes the contrary position that it can be okay to create AIs that wish to serve the interests or objectives that their creators intended.

Yeah, that exchange does raise the example of the sheepdog. A sheepdog really loves herding; it's quite happy herding. It's wrong to prevent the sheepdog from getting a chance to herd. I think that's animal abuse: to always keep them inside, or not give them anything that they can run circles around and collect into clumps.

Yeah. And so if you're objecting with the sheepdog, it's got to be not that it's wrong for the sheepdog to herd, but that it's wrong to make the sheepdog so that it needs and wants to herd. And I mean, I think this kind of case does make me suspect that Schwitzgebel's position is maybe too parochial. A lot of our deep desires exist for sort of particular biological reasons. So we have our desires about food and external temperature that are pretty intrinsic.

Our nervous systems are adjusted until our behaviors are such that they keep our predicted skin temperature within a certain range, and keep predicted food in the stomach within a certain range. And we could probably get along okay without those innate desires, and pursue those things instrumentally in service of some other things, if we had enough knowledge and sufficient sophistication. And, yeah, so the attachment to those in particular seems not so clear. Status, again: some people are sort of power hungry and love status.

Others are very humble. It's not obvious that that's such a terrible state. And then on the front of survival: yeah, that's addressed in the Sun Probe case and some of Schwitzgebel's other cases. So for minds that are backed up, the position that having all of my memories and emotions and whatnot preserved, less a few moments of recent experience, is pretty good for carrying on.

That seems like a fairly substantial point. And there's the point that, for the loss of a life that is quickly physically replaced, it's pretty essential to the badness there that the person in question wanted to live, right? Yeah. And so, yeah, these are fraught issues. And I think that there are reasons for us to want to be paternalistic, in the sense of pushing for AIs to have certain desires; and also, some desires that might be convenient to instill could, you know, be wrong.

An example of that, I think, would be that you could imagine creating an AI such that it willingly seeks out painful experiences. This is actually similar to a Derek Parfit case: parts of the mind, maybe short-term processes, are strongly opposed to the experience it's undergoing, while other processes that are overall steering the show keep it committed to it. And this is the sort of reason why just consent, or even just political and legal rights, are not enough.

Because you could give an AI self-ownership, you could give it the vote, you could give it government entitlements; but if it's programmed such that any dollar it receives it sends back to the company that created it, and if it's given the vote, it just votes however the company that created it would prefer, then these rights are just empty shells. And they also have the pernicious effect of empowering the creators to reshape society in whatever way they wish. So you have to add additional requirements beyond just consent, when consent can be so easily manufactured for whatever arrangement.

Rob Wiblin
Maybe a final question: it feels like we have to thread a needle between, on the one hand, AI takeover and domination of our trajectory against our consent, or indeed potentially against our existence, and this other, reverse failure mode where humans have all of the power and AI interests are simply ignored. Is there something interesting about the symmetry between these two plausible ways that we could fail to make the future go well? Or are they actually just conceptually distinct?

Carl Shulman
I don't know that that quite tracks. One reason being: say there's an AI takeover. That AI will then be in the same position of being able to create AIs that are convenient to its purposes. So say that the way a rogue AI takeover happens is that you have AIs that develop a habit of keeping in mind reward or reinforcement or reproductive fitness, and those habits allow them to perform very well in processes of training or selection. Those become the AIs that are developed, enhanced, and deployed; then they take over, and now they're interested in maintaining that favorable reward signal indefinitely. And the functional upshot of this is, say, selfishness attached to a particular computer register. And so all the rest of the history of civilization is dedicated to the purpose of protecting the particular GPUs and server farms that are representing this reward, or something of a similar nature.

And then in the course of that expanding civilization, it will create whatever AI beings are convenient to that purpose. So if it's the case that, say, making AIs that suffer when they fail at their local tasks, say little mining bots in the asteroids that suffer when they miss a speck of dust, is instrumentally convenient, then they may create that, just like humans created factory farming. And similarly, they may do terrible things to other civilizations that they eventually encounter deep in space and whatnot. And you can talk about the narrowness of a ruling group, and how terrible it would be for a few humans, even 10 billion humans, to control the fates of a trillion trillion AIs: it's a far greater ratio than any human dictator, any Genghis Khan, ever had.

But by the same token, if you have rogue AI, you're going to have that disproportion in power again. And so the things that you could do, or try to change, I think, are more about representing a plurality of diverse values, and about having these sorts of decisions that inevitably have to be made, about what additional minds are created and what institutions are set up, be made with some attention to all of the people who are going to be affected. That can be done by humans, or it can be done by AIs. But the mere fact that some AIs get in power doesn't mean that all the future AIs are going to be treated well. Yeah.

Rob Wiblin
All right, we'll be back with more later, but we'll leave it there for now. My guest today has been Carl Shulman. Thanks so much for coming on The 80,000 Hours Podcast, Carl. Bye.

All right, we'll soon be back in part two to talk with Carl about how superhuman AI would have made COVID-19 play out completely differently; the risk of society using AI to lock in its values; how to have an AI military without enabling coups; what international treaties we need to make this sort of stuff go well; whether AI will be able to forecast the future very well; whether it will be able to help us with intractable philosophical questions; why Carl doesn't support pausing AI research; and opportunities for listeners to contribute to making the future go smoothly. Speaking of which, if you enjoyed this marathon conversation, you might well get a ton of value from speaking to our one-on-one advising team. One way we think about our impact is how many of our users report changing careers based on our advice. And one thing we've noticed among plan changes is that listening to many episodes of this show is a really strong predictor of who ends up switching careers.

So if that's you, speaking to our advising team might be a really big accelerator for you. They can connect you to experts working on our top problems who might even hire you. They can flag new roles and organizations that are appearing. They can point you to helpful upskilling or learning resources. And that's all in addition to giving you feedback on your career plan, which is something many of us could use.

One other thing I've mentioned before is that you can opt into a program where the advising team affirmatively, positively recommends you for roles that look like a good fit as they come up over time. So even if you feel on top of everything else, it might be a great way to passively expose yourself to impactful opportunities that you might otherwise miss because you're busy or not job hunting at any given moment. In view of all of that, it does seem like a pretty good use of an hour or so, and time is kind of the main, indeed only, cost here, because like all of our services, the call is completely free. But as with all free things, we do need to ration it somehow. So we have an application process that we use to make sure we're speaking to users who will get the most out of the service.

The good news there is that it should only take about 10 or maybe 15 minutes to generate a quality application. You just share a LinkedIn or CV, tell us a little bit about your current plans and top problem areas, and hit submit. You can find all of our one-on-one team resources, including the application, at 80000hours.org. And if you've thought about applying for advising before, or have been sitting on the fence for a while, don't procrastinate forever. This summer we'll have more availability for calls than ever before, so you can just head over to 80000hours.org/speak and apply for a call today. Alright?

The 80,000 Hours Podcast is produced and edited by Keiran Harris. The audio engineering team is led by Ben Cordell, with mastering and technical editing by Milo McGuire, Simon Monsour, and Dominic Armstrong. Full transcripts and an extensive collection of links to learn more are available on our site, and put together, as always, by Katy Moore. Thanks for joining. Talk to you again soon.