Politics & the Future of Tech

Primary Topic

This episode focuses on the intersection of technology, policy, and politics, emphasizing the significant role that tech plays in governmental decisions and societal advancements.

Episode Summary

In "Politics & the Future of Tech," hosts Marc Andreessen and Ben Horowitz delve into how the tech industry, particularly startups, must actively engage with politics to shape policies favoring innovation and competition. They discuss the historical context of tech's involvement in politics, its recent intensification due to the industry's growing influence on all aspects of life, and the potential consequences of regulation on innovation. They argue that for America to maintain its global leadership and competitive edge, sound tech regulation is crucial. The episode emphasizes the diverging interests of "big tech" and startups, the strategic importance of having a voice in Washington, and the need for bipartisan cooperation to foster a favorable regulatory environment.

Main Takeaways

  1. Necessity of Political Engagement: The tech industry must engage more deeply in political processes to ensure regulations support innovation.
  2. Impact of Regulation: Over-regulation could stifle innovation, especially if it serves big tech interests over startups and public welfare.
  3. Strategic Involvement: Startups need strategic political involvement to advocate for regulations that encourage competition and innovation.
  4. Bipartisanship in Tech Policy: Effective tech policy requires bipartisan understanding and cooperation.
  5. Future of Tech and Politics: The evolving relationship between Silicon Valley and Washington is crucial for the future of tech and its role in society.

Episode Chapters

1: Introduction

Marc Andreessen and Ben Horowitz discuss the importance of the tech industry’s involvement in politics and policy. Marc Andreessen: "For more details, including a link to our investments, please see a16z.com/disclosures."

2: The Historical Context

The hosts explain the historical minimal involvement of tech in politics and its shift due to technology's pervasive impact. Ben Horowitz: "Ignoring tech is no longer an option in the government."

3: The Current Tech-Political Landscape

Discussion on current hot topics in tech policy and the firm’s nonpartisan approach to advocacy. Ben Horowitz: "Big tech’s interests are very different than startup innovators' interests."

4: Call to Action

The need for a proactive approach in shaping tech policies to support innovation and competition is emphasized. Marc Andreessen: "We advocate for tech policy topics. We do not advocate for other partisan topics."

Actionable Advice

  1. Engage with Policymakers: Startups should actively engage with lawmakers to inform and influence tech-related policies.
  2. Understand Regulatory Impacts: Companies must understand how regulations can impact their operations and innovate within legal confines.
  3. Foster Industry Coalitions: Building coalitions with other firms can amplify the voice and influence of tech startups in political arenas.
  4. Monitor Policy Developments: Stay updated on policy developments to anticipate and react to changes that could impact the industry.
  5. Promote Transparency: Advocate for policies that promote transparency in government dealings with big tech to ensure fair competition.

About This Episode

“If America is going to be America in the next one hundred years, we have to get this right.” - Ben Horowitz
Welcome to “The Ben & Marc Show”, featuring a16z co-founders Ben Horowitz and Marc Andreessen. In this latest episode, Marc and Ben take on one of the most hot-button issues facing technology today: tech regulation and policy.

In this one-on-one conversation, Ben and Marc delve into why the political interests of “Big Tech” conflict with a positive technological future, the necessity of decentralized AI, and how the future of American innovation is at its most critical point. They also answer YOUR questions from X (formerly Twitter). That and much more. Enjoy!

People

Marc Andreessen, Ben Horowitz

Content Warnings:

None

Transcript

Ben Horowitz

You know, ignoring tech is kind of no longer an option in the government. Big tech has been present in Washington, but Big Tech's interests are not only very different than startup innovators' interests, but, we think, also divergent from America's interest as a whole. You know, if America is going to be America in the next hundred years, we have to get this right. The content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investor or potential investors in any a16z fund. Please note that a16z and its affiliates may maintain investments in the companies discussed in this podcast.

Marc Andreessen

For more details, including a link to our investments, please see a16z.com/disclosures. Welcome back, everybody. We are very excited for this episode. We are going to be discussing a lot of hot topics. The theme of today's show is tech and policy and politics. And so there's just a tremendous amount of heat right now in the tech world about politics.

There's a tremendous amount of heat in the political world about tech. And then we as a firm, and actually both Ben and I as individuals, have been spending a lot more time in policy and politics circles over the last several months, and we as a firm have a much bigger push than we used to, which Ben will describe in a moment. The big disclaimer that we want to provide upfront for this is that we are a nonpartisan firm. We are 100% focused on tech, politics, and policy. We today, in this episode, are going to be describing a fair number of topics, some of which involve partisan politics.

Our goal is to describe anything that is partisan as accurately as possible, and to try to be as fair-minded in representing multiple points of view as we can be. We are going to try very hard to not take any sort of personal political, partisan position. Please, if you could, grant us some generosity of interpretation in what we say: we are trying to describe and explain, as opposed to advocate for anything specifically partisan. We advocate for tech policy topics. We do not advocate for other partisan topics.

Ben, a little while ago you wrote and published a blog post about our firm's engagement in politics and policy, where we laid out our goals and also how we're going about it. And we are actually quite transparent about this. So I was hoping, maybe as an introduction for people who haven't seen that, you could walk through what our plan and strategy is and how we think about this.

Ben Horowitz

Why get involved in politics now? Historically, tech has been a little involved in politics, but it's been relatively obscure issues: H-1B visas, stock option accounting, carried interest, things like that. But now the issues are much more mainstream. And it turns out that for most of the software industry's life, Washington just hasn't been that interested in tech, or in regulating tech, for the most part.

But starting kind of in the mid-2000s, as software ate the world and tech started to invade all aspects of life, ignoring tech is kind of no longer an option in the government. They've seen it impact elections and education and everything. And, you know, "get in front of it" is a term that we hear a lot from policymakers: we need to be in front of these things this time, not like last time, when we were behind the curve. And so tech really needs a voice, and in particular, little tech needs a voice. So big tech has been present in Washington, but big tech's interests are not only very different than kind of startup innovators' interests, but we think also kind of divergent from America's interest as a whole.

And so that just makes it, like, quite imperative for us to be involved, not only to represent the startup community, but also to kind of get to the right answer for the country. For the country, this is, we think, a mission-critical effort. Because if you look at the last century of the world and you say, okay, why was America strong? Why was basically any country significant in terms of military power, economic power, cultural power in the last hundred years? It was really those countries that got to the industrial revolution first and exploited it best.

And now, at the dawn of the kind of information age revolution, we need to be there and not fall behind, not lose kind of our innovative edge. And that's all really up for grabs. Because from a capitalistic-system standpoint, from an education standpoint, from a talent standpoint, we're still extremely strong and should be a great innovator. But the thing that would stop that would be kind of bad or misguided regulation that forces innovation elsewhere, out of the country, and prevents us ourselves, America and the American government, from adopting these technologies as well. And the things that would drive us to be bad on tech regulation? The first is really big tech, whose goal is not to drive innovation or make America strong, but to preserve their monopoly.

We've seen that play out now in AI in a really spectacular way, where big tech has pushed for the banning of open source for "safety reasons." Safety reasons! Now, you can't find anybody who's been in the computer industry who can tell you that an open source project is less safe, first of all, from a hacking standpoint. When you talk about things like prompt injection and new attacks and so forth, you would much more trust an open source solution for that kind of thing. But also for a lot of the concerns of the US government about copyrights, where does this technology come from, and so forth.

Not only should the source code be open, but the data should probably also be open as well, so we know what these things were trained on. That's also important for figuring out what their biases are and so forth; how can you know if it's a black box? So this idea that closed source would be safer doesn't hold up. And big tech actually got some of this language into the Biden administration executive order, literally under the guise of safety, to protect themselves against competition.

Competition is really, really scary. And so that's kind of a big driver. The other kind of related driver is, I think, this combination of big tech pushing for fake safetyism to preserve their monopoly, and then a rather thin understanding of how the technologies work in the federal government. And so without somebody kind of bridging the education gap, we are, as a country, very vulnerable to these bad ideas. And we also think it's just a critical point in technology's history to get it right.

Because if you think about what's possible with AI, so many of our country's kind of biggest challenges are very solvable now. You know, things like education, better and more equal healthcare, just thinning out the bureaucracy that we've built and making the government easier to deal with, particularly for kind of underprivileged people trying to get into business and do things and become entrepreneurs. All these things are made much, much better by AI. Similarly, crypto is really our best answer for delivering the Internet back to the people and away from the large tech monopolies. It is the one technology that can really do that. And if we don't do that over the next five years, these monopolies are going to get much, much stronger.

Probably some of them will be stronger than the US government itself. And we have this technology that can help us get to this dream of stakeholder capitalism and participation for all, economically. And we could undermine the whole thing with poor regulation. And then finally, in the area of biology, we're at an amazing point. If you look at the kind of history of biology, we've never had a language for it, much like we didn't have a language to describe physics for a thousand years.

We didn't have a language to really model biology till now. The language for physics was calculus. The language for biology is AI. And so we have the opportunity to cure a whole host of things we could never touch before, as well as kind of address populations that we never even did any testing on before and always put in danger. And here again, you have big pharma, whose interest is in preserving the existing system, because it kind of locks out all the innovative competition.

And so for all those reasons, we've, like, massively committed the flag and the firm to being involved in politics. So you've been spending a tremendous amount of time in Washington. I've been spending time in Washington. Many of our other partners, like Chris Dixon and Vijay Pande, have been spending time in Washington. We have, like, real actual lobbying capability within the firm.

We'll talk about that some more, but call it government affairs. They're registered lobbyists, and they're working to kind of work with the government and set up the right meetings and help us get our message across. And then we're deploying a really significant amount of money to basically pushing innovation forward, getting to the right regulation on tech that preserves America's strength. And we are not only committed to doing that this year, but for the next decade. And so this is a big effort for us.

And we thought it'd be a good idea to talk about it on the podcast. Thank you, that was great. And the key point there at the end is worth double-underlining, I think, which is: long-term commitment. There have been times, with tech specifically, where there have been people who have kind of cannonballed their way onto the political scene with large sort of money bombs.

Marc Andreessen

And then maybe they were just single issue or whatever, but they're in and out. They were just in and out. It was just like they thought they could have short term impact. Then two years later they're gone. We're thinking about that very differently.

Ben Horowitz

Yeah, that's why I brought up the historical lens. We really think that if America is going to be America in the next hundred years, we have to get this right. Good. Okay. We're going to unpack a lot of what you talked about and go into more detail about it.

Marc Andreessen

So I will get going on the questions, which, again, thank you, everybody, for submitting questions on X. We have a great lineup today. I'm going to combine a bunch of these questions because there were some themes. Jared asks: why has tech been so reluctant to engage in the political process, both at the local and national level, until now? And then Kate asks, interestingly, the opposite question. I find this juxtaposition very interesting because it gets to the nature of how we've gotten to where we are.

Kate asks, making the opposite point: tech leaders have spent hundreds of millions lobbying in DC. In your opinion, has it worked, and what should we be doing differently as an industry when it comes to working with DC? I wanted to juxtapose these two questions because I actually think they're both true. And the way that they're both true is that there is no single "tech." Right.

And to your point, there is no single tech. And maybe once upon a time there was. I would say my involvement in political efforts in this domain started 30 years ago, and so I've seen a lot of the evolution over the last three decades. I was in the room for the founding of TechNet, which is one of the sort of legacy efforts, with John Chambers and John Doerr. So I've kind of seen a lot of twists and turns on this over the last 30 years.

And I think the way I would describe it is, as Ben said: one is, look, was there sort of a distinction, a real difference of view, between big tech and little tech 20-30 years ago? Yes, there was. It's much wider now; that whole thing has really gapped out. And, Ben, you probably remember, big tech companies in the eighties and nineties often actually didn't really do much in politics. Probably most famously Microsoft: everybody at Microsoft during that period would probably say they had underinvested, given what happened with the antitrust case that unfolded. Yeah, actually, the one issue we were united on was the stock option accounting, where, interestingly, we were against Warren Buffett, and Warren Buffett was absolutely wrong on it and won. And it's actually very much strengthened tech monopolies.

Ben Horowitz

So I think it did the opposite of what people, certainly in Silicon Valley, wanted, and what I think people in Washington, DC, and in America would have wanted: it made these monopolies so strong, using their market cap to further strengthen their monopoly, because we moved from stock options to, well, it's too esoteric to get into here. But let's just say, trust me, it was bad. Yes, it was very good for big companies, very bad for startups. Yeah. And actually, another thing happened in the nineties and 2000s. There's a fundamental characteristic of the tech industry, and in particular tech startups and tech founders, and Ben and I would include ourselves in that group, which is: we are idiosyncratic, disagreeable, iconoclastic people.

Marc Andreessen

And so there is no tech startup association. Every industry group in the country has an association that has offices in DC and lobbyists and major financial firepower, under names like the MPAA in the movie industry, the RIAA in the record industry, the National Association of Broadcasters, the national oil and gas associations, and so forth. So every other industry has these groups where basically the industry participants come together and agree on a policy agenda. They hire lobbyists and they put a lot of money behind it. The tech industry, we've just never been good at that. Especially the startups, we've never been good at agreeing on a common platform.

In fact, Ben, you just mentioned the stock option accounting thing. That's my view of what happened at TechNet, which was an attempt to actually get the startup founders and the new dynamic tech companies together. But the problem was we all couldn't agree on anything other than basically two issues: stock option expensing, and carried interest tax treatment for venture capital firms.

Basically what ended up happening, in my view, was that TechNet early on got anchored on these, I would say, pretty esoteric accounting and financial issues, and just never had a view on, could not come to agreement on, many other issues. And I think a lot of attempts to coordinate tech policy in the Valley have had that characteristic. Quite honestly, the other side of it, Ben, you highlighted this, but I want to really underline it, is just: look, the world has changed. Up until around 2010, I think you could argue that politics and tech were just never that relevant to each other.

For the most part, what tech companies did was they made tools. Those tools got sold to customers, who used them in different ways. How do you regulate database software, or an operating system, or a word processor, or a router? It's like regulating a power drill or a hammer.

Ben Horowitz

Right? Yeah, exactly right. Like, yeah, what are appropriate shovel regulations? And so it just wasn't that important. And then, look, this is where I think Silicon Valley deserves its share of blame for whatever's gone wrong, which is, as a consequence, I think we all just never actually thought it was that important to really explain what we were doing and to be really engaged in the process out there.

Marc Andreessen

And then, look, the other thing that happened was there was a love affair for a long time. There was just a view that tech startups are purely good for society, tech is purely good for society, there were really no political implications to tech.

And by the way, this actually continued, interestingly, up through 2012. People now know all the headlines about social media destroying democracy and all these things; that really kicked into gear after 2015, 2016. But even in 2012, when social media had become very important in the election, the narrative in the press was almost uniformly positive. It was very specifically that social media is protecting democracy by making sure that certain candidates get elected.

And then also, by the way, Obama: there were literally headlines from newspapers and magazines that today are very anti-tech, that were very pro-tech at that point, because the view was that tech helped Obama get reelected. And then the other thing was actually the Arab Spring. There was this moment where it was like, tech is not only going to protect democracy in the US, it's going to protect democracy all over the world. And Facebook and Google were viewed at the time as the catalysts for the Arab Spring, which was going to, of course, bring a flowering of democracy to the Middle East.

Ben Horowitz

It didn't work out that way, by the way. Did not work out that way. And so, anyway, the point is it is relatively recent, in the last 10-12 years, that everything has just kind of come together. And all of a sudden, people in the policy arena are very focused on tech, people in the tech world have very strong policy and politics opinions.

Marc Andreessen

The media weighs in all the time. And by the way, none of this is a US only phenomenon. We'll talk about other countries later on. But there's also a global, you know, kind of thing. You know, these issues are playing out globally in many different ways.

I guess one thing I would add is, like, I do a fair amount in DC on the non-political side, and when I'm in meetings involving national security or intelligence or civil policy of whatever kind, it's striking how many topics that you would not think are tech topics end up being tech topics. And it's just because, like, when the state exercises power now, it does so through technologically enabled means. And then when citizens basically resist the state or fight back against the state, they do so with technologically enabled means. And so, as I sometimes say, we're the dog that caught the bus on this stuff, right? Which is, we all wanted tech to be important in the world.

It turns out tech is important in the world. And then it turns out the things that are important in the world end up getting pulled into politics. Yeah. And I think that's right on the second part of the question, which is: why has tech been so ineffective despite pouring all that money at it?

Ben Horowitz

And I think there are, like, a few kind of important issues around that. One is really arrogance, in that I think we in tech, and a lot of the people who went in, are like, oh, we're the good guys, we're for the good, and everybody will love us when we get there, and we can just push our agenda on the policymakers without really putting in the time and the work to understand the issues and the things that you face as somebody in Congress or somebody in the White House in trying to figure out what the right policy is. And I think that we are coming at that from our cultural value, which is we take a long view of relationships. We try never to be transactional.

And I think that's especially important on policy, because these things are massively complex. And so we understand our issues and our needs, but we have to take the time to understand the issues of the policymakers and make sure that we work with them to come up with a solution that is viable for everyone. So I think that's thing one; I think tech has been very bad on that. And the second one is, I think they've been partisan where it's been not necessary, or not even smart, to be partisan.

So people have come in with whatever political bent they have, mostly kind of Democratic Party, and it's like, okay, we're going to go in without understanding you, and only work with Democrats because we're Democrats, this kind of thing. And I think our approach is: look, we are here to represent tech. We want to work with policymakers on both sides of the aisle. We want to do what's best for America. We think that if we can describe that correctly, then we'll get support from both sides.

And that's just a really different approach. So hopefully that's right, and hopefully we can make progress. Okay, good. So let's go to the next question. So this is, again, a two part question.

Marc Andreessen

So Sheen asks: in what ways do you see the relationship between Silicon Valley and DC evolving in coming years, particularly in light of recent regulatory efforts targeting tech giants? We'll talk about TikTok later on, but there have obviously been big flashpoint kind of events happening right now. By the way, for people who haven't seen it, the DOJ just filed a massive antitrust lawsuit against Apple.

The tech topics are very hot right now in DC. So how do we see the relationship? That one's interesting. One of the things that I've talked about is that a lot of little tech, I think, is very much in alignment with some of the things that the FTC is doing, but probably we would do it in a very different way, against a different kind of set of practices and behaviors of some of the tech monopolies. It just shows why more conversation is important on these things, because what we think is the kind of abuse of the monopoly and what the lawsuit alleges are, I would say, not exactly the same thing.

Well, let's talk about that for a moment, because this is a good case study of the dynamics here. The traditional free-market libertarian view is very critical of antitrust theory in general, and it's certainly very critical of the current prevailing antitrust theories, which are more expansive and aggressive than the ones of the last 50 years, as shown in things like the Apple lawsuit and many other recent actions. For people in business, there's a reflexive view that basically says businesses should be allowed to operate. But then, very specifically, there are certainly people who have this view that basically says any additional involvement of the political machine, especially the sort of prosecutorial machine, in tech is invariably going to make everything worse in tech.

And so, yeah, they sue Apple today, and maybe you're happy because you don't like Apple, because they abused your startup or whatever, but if they win against Apple, they're just going to keep coming and coming and do more and more of these. The opposing view would be the one that says: no, actually, to your point, the interests of big tech and little tech have really diverged, and if there is not strong and vigorous investigation and enforcement, and then ultimately things like the Apple lawsuit, these companies are going to get so powerful that they may be able to really seriously damage little tech for a very long time. So maybe, Ben, talk a little bit about how we think through that, because we even debate this inside our firm, and then what you think, and where you think those lines of argument are taking us. Yeah.

Ben Horowitz

So, look, I definitely think so. And, by the way, full disclosure: when we were at Netscape, we were certainly on the side of little tech against big tech. And Microsoft at that time had a 97% market share on the desktop, and it was very, very difficult to innovate on the desktop. It was just bad for innovation to have them in that level of position of power. And I think that's happened on the smartphone now, particularly with Apple.

I think the Epic case and the Spotify case are really great examples of that, where Apple is fielding a product that's competitive with Spotify while charging Spotify a 30% tax on their product. That seems unfair just from the standpoint of the world. And it does seem like it's using monopoly power in a very aggressive way. I think it's certainly against our interest, and the interest of new companies, for the monopolies to exploit their power to that degree. When the government gets involved, it's not going to be a clean, surgical, okay-here's-exactly-the-change-that's-needed kind of thing.

But I also think, with these global businesses with tremendous lock-in, you just have to at least have the conversation and say, okay, what is this going to do for consumers if we let it run? And we need to represent that point of view from the small tech perspective, I think.

Marc Andreessen

The big tech companies are certainly not doing us favors right now. They're certainly not acting in ways that are pro-startup, I think it's fair to say. No, no, no. The opposite.

Ben Horowitz

Sure. Quite the opposite. One of the ideas I kick around a lot is that it feels like any company is either too scrappy or too arrogant, but never in the middle.

Yeah, yeah, yeah. It's like people, right?

Marc Andreessen

You're either the underdog or you're the overdog. And there's not a lot of reasonable dogs. Exactly, exactly.

So there's inherent tension there. It seems very hard for these companies to reach a point of dominance and not figure out some way to abuse it. I also think you kind of touch on an important point, which is, in representing little tech, we're not a pure libertarian, anti regulatory kind of force here. We think we need regulation in places. We certainly need it in drug development.

Ben Horowitz

We certainly need regulation in crypto and financial services. The financial services aspect of crypto is very, very important, and it's very important to the industry that it be strong in America, with a proper kind of regulatory regime. So we're not anti-regulation. We're pro the kind of regulation that will make both innovation strong and the country strong.

Yeah. And we should also say, look, when we're advocating on behalf of little tech, obviously there's self-interest as a component of that, because we're a venture capital firm and we back startups, and so there's obviously a straight financial interest there. I will say, and I think, Ben, you'd agree with me:

Marc Andreessen

We also feel like, philosophically, this is a very pro-American position, a very pro-consumer position. And the reason for that is very straightforward, which is, Ben, as you've said many times in the past, the motto of any monopoly is, what's the motto? "We don't care, because we don't have to." Right, exactly.

And you've experienced it if you've called customer service when one of these monopolies has kicked you off their platform. Yes, exactly. And so, yeah, it's just that there is something in the nature of monopolies where, if they no longer have to compete and they're no longer disciplined by the market, they basically go bad.

And then how do you prevent that from happening? The way you prevent that is by forcing them to compete. In some cases, they compete with each other, although often they collude with each other, which is another thing. Monopoly and cartel are kind of two sides of the same coin. But really, at least in the history of the tech industry, it's when they're faced with startup competition, when the elephant has a terrier at his heels nipping at him, taking increasingly big bites out of his foot, that's when big companies actually act and do new things.

And so you need healthy startup competition. There are many sectors of the economy where it's just very clear now that there's not enough of it, because the incumbents that everybody deals with on a daily basis are practically intolerable. And it's not in anybody's interest, ultimately, from a national policy standpoint, for that to be the case. Things can get to where preserving those monopolies benefits the big companies, but very much nobody else. Yeah, exactly. Exactly.

Ben Horowitz

Which is such a big impetus behind our political activity. Yeah, that's right. Okay, we'll keep going. So now we're going future-looking.

Marc Andreessen

So in what ways do you see the relationship between Silicon Valley and DC evolving in the coming years? And specifically, again, we're not going to be making partisan recommendations here. But there is an election coming up, and it is a big deal. Both what happens in the White House and what happens in the Congress is going to have big consequences for everything we've just been discussing. So how do we see the upcoming election affecting tech policy? Ben, why don't you start?

Ben Horowitz

Yeah, well, I think there are several issues that end up being really important to educate people on now, because whatever platform you run on as a congressperson or as the president, you want to live up to that promise when you get elected. And so a lot of the positions that will persist over the next four years are going to be established now. In crypto in particular, we've been very active on this: we made a big donation to something called the Fairshake PAC, which works on this, identifying for citizens which politicians are on what side of these issues, who the flat-out anti-crypto, anti-innovation, anti-blockchain, anti-decentralized-technology candidates are, so we at least know who they are, so we can tell them we don't like it and then tell all the people who agree with us that we don't like it. A lot of it ends up being, look, we want the right regulation for crypto. We've worked hard with policymakers to help them formulate things that will prevent scams and nefarious uses of the technology, things like money laundering and so forth, and then enable the good companies, the companies that are pro-consumer, helping you own your own data rather than have it owned by some monopoly corporation that can exploit it, or just lose it and get broken into.

And so you now have identity theft problems and so forth. It can help enable a fairer economy for creatives, so that there's not a 99% take rate on things you create on social media, these kinds of things. And so it's just important, I think, to educate the populace on where every candidate stands on these issues. We're really, really focused on that. And I think the same is true for AI, the same is true for bio. I'd also add, and maybe I'll talk a little more about the election in a moment, that it's not actually the case that there's a single party in DC that's pro-tech and a single party that's anti-tech.

Definitely not. There's not. And by the way, if that were the case, it might make life a lot easier. Yes, but it's not the case. And I'll just give a thumbnail sketch of at least what I see when I'm in DC, and see if you agree with this.

Marc Andreessen

So the Democrats are, sort of, much more fluent in tech. And I think that has to do with who their elites are. It has to do with a very long-established revolving door, and I mean that in both the positive and the pejorative sense, between the tech companies and the Democratic Party: Democratic politicians' offices, congressional offices, White House offices. There's just a lot more integration. The big tech companies tend to be very Democratic, which you see in all the donation numbers and voting numbers.

And so there are just a lot more, I would say, tech-fluent, tech-aware Democrats, especially in powerful positions. Many of them have actually worked in tech companies. Just as an example, the current White House chief of staff is a former board member at Meta, where I'm on the board. And so there's a lot of connective tissue there. Having said that, the current Democratic Party, and in particular certain of its more radical wings, has become extremely anti-tech, to the point of being arguably, in some cases, outright anti-business, anti-capitalism.

And so there's a real back and forth there. Republicans, on the other hand, in theory, and as the stereotype would have you believe, are inherently more pro-business and more pro-free-markets, and should therefore be more pro-tech. But I would say there, again, it's a mixed bag. Number one, a lot of Republicans just basically think of Silicon Valley as all Democrats. And if we're Republicans, that means they're de facto the enemy.

They hate us. They're trying to defeat us. They're trying to defeat our policies. And so they must be the enemy. And so there's a lot of, I would say, some combination of distrust and fear and hate kind of on that front.

And then, again, there's much less connective tissue: there are many fewer Republican executives at these companies, which means there are many fewer Republican officials or staffers who have tech experience. And so there's a lot of mistrust. And of course, there have been flashpoint issues around this lately, like social media censorship, that have really exacerbated the conflict. And then the other thing is, there are very serious policy disagreements. There are at least wings of the modern Republican Party that are actually quite economically interventionist.

And so the term of the moment is industrial policy: there are Republicans who are very much in favor of a much more interventionist government approach toward dealing with business, and in particular dealing with tech. I guess I'd say this is not a binary, not an either-or thing. There are real issues on both sides. The way we think about that is that there's therefore a real requirement to engage on both sides. There's a real requirement, Ben, to your point, to educate on both sides.

And if you're going to make any progress on tech issues, there's a real need for a bipartisan approach, because you do have to actually work with both sides. Yeah. And I think that's absolutely right. And just to name names a little: on the Democratic side, you've got people like Ritchie Torres out of the Bronx, and, by the way, a huge swath of the Congressional Black Caucus, who see that crypto is a real opportunity to equalize the financial system, which has historically been demonstrably racist against a lot of their constituents, and also for the creatives, whom they represent a lot, to get a fair shake.

Ben Horowitz

And then, on the other hand, you have Elizabeth Warren, who has taken a very totalitarian view of the financial system and is moving to consolidate everything in the hands of a very small number of banks and basically control who can and cannot participate in finance. So these are just very, very different views out of the same party. And I think we need to make the specific issues really, really clear. Yeah. And the same thing.

Marc Andreessen

We could spend a long time also naming names on the Republican side. So. Yes, which we'll do later. Well, I should do it right now, just to make sure we're fair on this. There are Republicans who are full-on pro-free-market, who are very opposed to all current government efforts to intervene in markets like AI and crypto. By the way, many of those same Republicans are also very negative on any antitrust action.

They're very ideologically opposed to antitrust. And so they would also be opposed to things like the Apple lawsuit that a lot of startup founders might actually like. And then, on the flip side, you have folks like Josh Hawley, for example, who are, I would say, quite vocally irate at Silicon Valley and very much in favor of much more government intervention and control. I think a Hawley administration, just as an example, would be extremely interventionist in Silicon Valley and would be very pro-industrial-policy, very much trying to both set goals and have government management of more of tech, but also taking much more dramatic action against a perceived or real enemy. So it's the same kind of mixed bag.

Yeah. So anyway, I wanted to go through that. This is the long, winding answer to the question of how the upcoming election will affect tech policy, which is: look, there are real issues with the Biden administration, in particular with the agencies and with some of the affiliated senators, as Ben just described. And under a Trump administration, the agencies would be headed by very different kinds of people. Having said that.

It's not that a Trump presidency would necessarily be a clean win. There are many people in that wing who might be hostile, by the way, in different ways, or in some cases in the same ways. Yeah. And by the way, Trump himself has been quite the moving target on this. He tried to ban TikTok, and now he's very pro-TikTok.

Ben Horowitz

He has been negative on AI. He was originally negative on crypto, and is now positive on crypto. It's complex. Yeah. Which is why, I think, the foundation of all of this is education, and why we're spending so much time in Washington and so forth: to make sure we communicate all that we know about technology, so that at least the decisions these politicians make are highly informed.

Marc Andreessen

So, moving forward. Three questions in one. Alex asks: as tech regulation becomes more and more popular within Congress, which is happening, do you anticipate a lowering, in general, of the rate of innovation within the industry? Number two, Tyler asks: what is a key policy initiative that, if passed in the next decade, could bolster the US for a century? And then Elliot Parker asks: what's one regulation that, if removed, would have the biggest positive impact on economic growth?

Yeah. So, and Ben, say so if you disagree with this, I don't know that there's a single regulation or a single law or a single issue. There are certainly individual laws or regulations that are important, but the thematic thing is the thing that matters, and the things that are coming are much more serious than the things that have been. I think that's correct.

Ben Horowitz

Yeah. Okay, we'll talk about that. Yeah, go ahead. I mean, if you look at the current state of regulation, if it stayed here, there's not anything we feel a burning desire to remove, in the way that things that are on the table could be extremely destructive. And basically, look, if we ban large language models, or large models in general, or we force them to go through some kind of government approval, or if we ban open source technology, that's just devastating.

It would basically take America out of the AI game and make us extremely vulnerable from a military standpoint, and extremely vulnerable from a technology standpoint in general. And so that's devastating. Similarly, if we don't get proper regulation around crypto, the trust in the system and the business model is going to fade, or is going to be in jeopardy, and it's not going to be the best place in the world to build crypto companies and blockchain companies, which would be a real shame. The analog would be the creation of the SEC after the Great Depression, which really helped put trust into the US capital markets. Trust in the blockchain system as a way to invest, participate, be a consumer, be an entrepreneur, is really important, and it's very important to get those regulations right.

Marc Andreessen

Okay. And then, speaking of which, let's move straight into the specific issues and expand on that. So Lenny asks: what form do you think AI regulation will take over the next two administrations? B.

Sikhandi asks: will AI regulation result in a concentrated few companies, or an explosion of startups and new innovation? Eray asks: how would you prevent the AI industry from being monopolized, centralized with just a few tech corps? And then our friend Beff Jezos asks: how do you see the regulation of AI, compute, and open source models realistically playing out? Where can we apply pressure to make sure we maintain our freedom to build and own AI systems? It's really interesting, because there's a regulatory dimension of that.

Ben Horowitz

And then there's the technological dimension of that. And they do intersect. If you look at what big tech has been trying to do: they're very worried about new competition, to the point where they've taken it upon themselves to go to Washington and try to outlaw their competitors. And if they succeed at that, then I think you get super-concentrated AI power, making the concentrated power of social media or search or so forth look really pale in comparison. I mean, it would be very dramatic if there were only three companies that were allowed to build AI, and that's certainly what they're pushing for.

So I think in the regulatory world where big tech wins, there are very few companies doing AI: probably Google, Microsoft, and Meta, with Microsoft having basically full control of OpenAI. As they've demonstrated, they have the source code, they have the weights, they've gone so far as to say they own everything, and they also control who the CEO is, as they demonstrated beautifully. So if you take that path, it will all be owned by three, maybe four companies. If you instead follow the technological dimension, I think what we're seeing play out has been super exciting. We were all wondering, would there be one model that ruled them all? And even within a company, I think we're finding that there's no current architecture, no single thing, a transformer model, a diffusion model, and so forth, that's going to become so smart in itself that once you make it big enough, it's just going to know everything.

And that's going to be that. What we've seen is that even the large companies are deploying a technique called mixture of experts, which implies you need different architectures for different things; they need to be integrated in a certain way, and the system has to work. And that just opens the aperture for a lot of competition, because there are many, many ways to construct a mixture of experts and to architect every piece of that. We've seen little companies like Mistral field models that are highly competitive with the larger models very quickly.
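As a loose sketch of the mixture-of-experts structure Ben describes (a toy illustration, not any production architecture; the expert functions and the routing rule here are invented purely for clarity): a gating function routes each input to specialized expert functions, and the system combines their outputs, rather than one monolithic model handling everything.

```python
# Two toy "experts": small linear functions, each specialized
# for a different region of the input space.
experts = [
    lambda x: 2.0 * x + 1.0,   # expert 0: handles x < 0
    lambda x: -0.5 * x + 3.0,  # expert 1: handles x >= 0
]

def gate(x):
    """Toy router: weight the experts based on the input.

    Real MoE layers learn this routing; it is hard-coded here
    purely to show the structure: route, run experts, combine.
    """
    return [1.0, 0.0] if x < 0 else [0.0, 1.0]

def mixture(x):
    weights = gate(x)
    # Weighted combination of the expert outputs.
    return sum(w * e(x) for w, e in zip(weights, experts))

print(mixture(-2.0))  # expert 0: 2*(-2) + 1 = -3.0
print(mixture(4.0))   # expert 1: -0.5*4 + 3 = 1.0
```

The point of the structure, as discussed above, is that every piece (the experts, the router, how they integrate) is a separate design decision, which is what opens the aperture for competition.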

And then there are other factors, like latency, cost, et cetera, that play into this. And then there's also "good enough": when is a language model good enough? When it speaks English? When it knows about certain things? What are you using it for? And then there's domain-specific data: say I've been doing medical research for years, and I've got data around all these genetic patterns and diseases and so forth. I can build a model against that data that's differentiated by the data, and so on.

So I think we're likely to see a great Cambrian explosion of innovation across all sectors, big companies, small companies, and so forth, provided that the regulation doesn't outlaw the small companies. That would be my prediction right now. Yeah. And I'd add a bunch of things to this. One is, even on the big-model side, there's been this leapfrogging taking place. So there's OpenAI.

Marc Andreessen

GPT-4 was the dominant model not that long ago. And then it's been leapfrogged in significant ways recently, both by Google, with their Gemini Pro, especially the one with the so-called long context window, where you can feed it 700,000 words or an hour of full-motion video as context for a question, which is a huge advance, and by Anthropic: a lot of people now find their big model, Claude, to be more advanced than GPT-4. And one assumes OpenAI is going to come back. And this leapfrogging will probably happen for a while.

So even at the highest end, at the moment, these companies are still competing with each other. There's still this leapfrogging taking place. And then, Ben, as you articulated very well, there is this giant explosion of models of all shapes and sizes. Our portfolio company Databricks just released what looks like another big leapfrog on the smaller-model side. I think it's the best small model now in the benchmarks.

It's actually so efficient, it will run on a MacBook. Yeah. And they have the advantage that, as an enterprise, you can connect it to a system that gives you not only enterprise-quality access control and all that kind of thing, but also the power to do SQL queries with it. It gives you the power to create a catalog, so that you can have a common, understood definition of all the weird corporate words you have. By the way, one of which is "customer."

Ben Horowitz

There are almost no two companies that define "customer" the same way. And in most companies, there are several definitions of customer. Is it a department at AT&T? Is it AT&T itself? Is it some division of AT&T? Et cetera, et cetera.

Marc Andreessen

I don't want to literally speak for them. But I think if you put the CEOs of the big companies under truth serum, what they would say is that their big fear is that AI is actually not going to lead to a monopoly for them; it's going to lead to a commodity, a race to the bottom on price. You see that a little bit now: people who are using one of the big models' APIs are able to swap to another big model's API from another company pretty easily. And the main business model for these big models, at least so far, is an API, basically pay per token generated or per answer.

And so if these companies really have to compete with each other, it may be that this is actually a hyper-competitive market. It may be the opposite of a search market or an operating system market: a market where there's continuous competition, improvement, and leapfrogging, and then constant price competition. And of course the payoff from that to everybody else is an enormously vibrant market, with constant innovation and constant cost optimization, where the entire world downstream that uses AI benefits from a hyper-competition that could potentially run for decades. And so I think if you put these CEOs under truth serum, what they would say is that's actually their nightmare.
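The "swap one API for another" dynamic can be sketched in a few lines (all class and function names here are hypothetical, not any vendor's real SDK): if an application codes against a thin common interface, switching model providers becomes a one-line change, which is exactly the commoditizing pressure being described.

```python
from typing import Protocol


class ChatModel(Protocol):
    """The thin common interface an application codes against."""
    def complete(self, prompt: str) -> str: ...


# Hypothetical provider adapters. A real version would call each
# vendor's HTTP API here and map its request/response format.
class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"


class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"


def summarize(model: ChatModel, text: str) -> str:
    # Application logic depends only on the interface, so swapping
    # providers is a price/quality decision, not a rewrite.
    return model.complete(f"Summarize: {text}")


print(summarize(ProviderA(), "earnings call"))
print(summarize(ProviderB(), "earnings call"))
```

When the switching cost is this low, price and quality become the only levers, which is the commodity scenario the big labs would rather avoid.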

Ben Horowitz

That's why they're in Washington. That's why they're in Washington. So that is what's actually happening. That is the scenario they're trying to prevent. They are actually trying to shut off competition.

Marc Andreessen

And by the way, I will tell you, there is a funny thing. Tech is so ham-handed, so historically bad at politics, that I think some of these folks think they're being very clever in how they go about this. They show up in Washington with a kind of public-service narrative, or end-of-the-world narrative, or whatever it is, and I think they think they're going to very cleverly trick people in Washington into giving them a sort of cartel status, and the people in Washington won't realize until it's too late.

But it actually turns out people in Washington are quite cynical. They've been lobbied before. Exactly. And so there is this thing, and I get this from them off the record a lot, especially after a couple of drinks, which is basically: if you've been in Washington for longer than two minutes, you have seen many industries, many big companies, come to Washington and want monopoly or cartel-style regulatory protection.

And so you've seen this play out, in some cases, if you've been there a long time, dozens or hundreds of times. My sense is that nine months ago or so, there was a moment where it seemed like the big tech companies could get away with this. I'm still concerned, and we're still working on it, but I think the edge has come off a little bit, because the cynicism of Washington in this case is actually correct. I think they're onto these companies.

And then, look, there are basically two unifying issues in Washington. One is they don't like China, and the other is they don't like big tech. And so this is a winnable war, on behalf of startups and open source and freedom and competition. So yeah, I'm worried, but I'm feeling much better about it than I was nine months ago.

Ben Horowitz

Yeah, well, look, we had to show up. I mean, that's the other thing. It's taught me a real lesson, which is you can't expect people to know what's in your head. You've got to go see them. You've got to put in the time, you've got to say what you think, and if you don't, you don't have any right to wring your hands about how bad things are.

Marc Andreessen

Then I just wanted to note one more thing, on Meta. You mentioned the big companies being Microsoft, Google, Meta. It's worth noting Meta is on the open source side of this, and Meta is actually working quite hard at it. And this is a big deal, because it's very contrary to the image.

I think people have of Meta from prior issues, correctly or not. But on the open source AI topic, and on freedom to innovate, at least for now, Meta is, I think, very strongly on that side. Yeah, I think that's right. It's actually a very interesting point, and I think essential for people to understand: the way Meta is thinking about this, and the way they're actually behaving and executing, is very similar to how Google thought about Android. Google's main concern was that Apple not have a monopoly on the smartphone, not so much that Google make money on the smartphone themselves, because a monopoly on the smartphone for Apple would mean that Google's other business was in real jeopardy.

Ben Horowitz

And so they ended up being an actor for good. Android's been an amazing thing for the world, I think, including getting smartphones into the hands of people all over the world who wouldn't have been able to get them otherwise. And Meta is making a very similar effort: in order to make sure they have AI as a great ingredient in their products and services, they're willing to open source it and give all of their very large investment in AI to the world, so that entrepreneurs and everybody else can keep them competitive. They don't plan to be in the business of selling AI, in the same way that Google is in the business of smartphones to some extent, but it's not their key business. Meta doesn't have a plan to be in the AI business; maybe to some extent they will, but that's not the main goal.

Marc Andreessen

And then I would put one other company on the concerning side of this, though it's too early to tell where they're going to shake out. Amazon just announced they're investing a lot more money in Anthropic. So I think Amazon is now basically to Anthropic what Microsoft is to OpenAI. I think that's right.

Anthropic is very much in the group of big tech, the new incumbent big tech, lobbying very aggressively for regulation and regulatory capture in DC. And so I think it's an open question whether Amazon picks up that agenda as openly, as Anthropic effectively becomes a subsidiary of Amazon. Yeah. Well, this is another place where we're on the side of Washington, DC, and the current regulatory motion. The big tech companies have done this thing which we thought was illegal, because we observed it occur at AOL and people went to jail: they invest huge amounts of money in startups. Microsoft and Amazon and Google are all doing it, billions of dollars, with the explicit requirement that those companies then buy GPUs from them, not at the discount they'd ordinarily get but at a relatively high price, and then be in their clouds.

Ben Horowitz

And in the Microsoft case, it's even more aggressive: give me your source code, give me your weights. Which is extremely aggressive. So they're moving money from the balance sheet to their P&L in a way that, at least from an accounting standpoint, our understanding was, isn't legal. And the FTC is looking at that now.

But it'll be interesting to see how that plays out. Yeah, well, that's one area; that's why it's called round-tripping. The other issue people should watch is consolidation. If you own half of a company and you get to appoint the management team, is that a subsidiary?

Marc Andreessen

Is that not a subsidiary? There are rules on that. At what point, when you own the company equity, you own the intellectual property of the company, and you control the management team. Yeah.

Ben Horowitz

Is that not your company? Yeah. And at that point, if you're not consolidating it, is that legal? And so the SEC is going to weigh in on that. And then, of course, to the extent that some of these companies have nonprofit components to them, there are tax implications to the conversion to for-profit and so forth.

Marc Andreessen

And so there's a lot here. I would say the stakes in the legal, regulatory, and political game being played here are quite high. Quite high.

Ben Horowitz

Yes.

Marc Andreessen

As Ben mentioned, Ben and I are old enough that we do know a bunch of people who have gone to jail. So some of these issues turn out to be serious. So Gabriel asks: what would happen if there was zero regulation of AI, the good, the bad, and the ugly? And this is actually a really important topic. We're vigorously arguing in DC that basically anybody should be completely free to build AI and deploy AI.

The big companies should be allowed to do it, small companies should be allowed to do it, open source should be allowed to do it. And look, a lot of the regulatory push we've been discussing, which comes from the big companies and from the activists, is to prevent that from happening and put everything in the hands of the big companies. So we're definitely on the side of freedom to innovate. Having said that, that's not the same as saying no regulations of anything ever.

We're definitely not approaching this with a hardcore libertarian lens. The interesting thing about regulation of AI is that, when you go down the list of things that reasonable, thoughtful people on both sides of the aisle consider to be concerns around AI, basically the implications they're worried about are less about the technology itself and more about the use of the technology in practice, for good or for bad. And so, Ben, you brought up, for example, that if AI is making decisions on things like granting credit or mortgages or insurance, then there are very serious policy issues around how those answers are arrived at and which groups are affected in different ways. The flip side is, if AI is used to plan a crime, a bank robbery, or a terrorist attack, that's obviously something people focused on national security and law enforcement are very concerned about.

Look, our approach on this is actually very straightforward: it seems completely reasonable to regulate uses of AI that would be dangerous. Now, the interesting thing about that is, as far as I can tell, and I've been talking to a lot of people in DC about this, every single use of AI to do something bad is already illegal under current laws and regulations. It's already illegal to discriminate in lending. It's already illegal to redline in mortgages. It's already illegal to plan bank robberies.

It's already illegal to plan terrorist attacks. Like these things are already illegal. And there's decades or centuries of case law and regulation and law enforcement and intelligence capabilities around all of these. And so to be clear, we think it's completely appropriate that those authorities be used. And if there are new laws or regulations needed due to other bad uses, that makes total sense.

But basically, the issues people are worried about can be contained and controlled at the level of the use, as opposed to saying, as some of the doomer activists do, that we need to literally prevent people from doing linear algebra on their computers. Yeah, well, I think it's important to point out what AI actually is. It turns out to be math, specifically a kind of mathematical model. For those of you who studied math in school: you can have an equation like y equals x squared plus b, and that equation can model the behavior of something in physics or the real world, so that you can predict something happening, like the speed at which an object will drop, and so forth.

Ben Horowitz

And AI is kind of that, but with huge computer power applied, so that you can have much bigger equations with, instead of two or three or four variables, 300 billion variables. The challenge with that, of course, is if you get into regulating math and you say, well, math is okay up to a certain number of variables, but at the two billion and first variable it becomes dangerous, then you're in a pretty bad place, in that you're going to prevent everything good from the technology from happening, as well as anything that you might think is bad. So you really do want to be in the business of regulating the applications of the technology, not the math. In the same way, nuclear power is potentially very dangerous, and nuclear weapons are extremely dangerous, but you wouldn't want to put parameters around what physics you could study, literally in the abstract, in order to prevent somebody from getting a nuke.
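The "AI is just math with parameters" point can be made concrete with a toy sketch. This is an illustrative assumption, not anything from the episode: a model with exactly two adjustable variables, fitted by gradient descent; a frontier model is the same mechanism with hundreds of billions of such variables.

```python
import random

# A model, at its core, is an equation with adjustable parameters.
# Here the "equation" is y = a*x + b with just two parameters; a large
# language model is the same idea with billions of parameters and a far
# more elaborate equation.

def predict(params, x):
    a, b = params
    return a * x + b

def train(data, steps=5000, lr=0.01):
    """Fit the two parameters by stochastic gradient descent on squared error."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        x, y = random.choice(data)
        err = predict((a, b), x) - y
        # Gradients of err**2 with respect to a and b.
        a -= lr * 2 * err * x
        b -= lr * 2 * err
    return a, b

if __name__ == "__main__":
    random.seed(0)
    # Synthetic data generated by the "true" relationship y = 3x + 1.
    data = [(x / 10, 3 * (x / 10) + 1) for x in range(-20, 21)]
    a, b = train(data)
    print(f"learned a={a:.2f}, b={b:.2f}")  # close to a=3, b=1
```

The point of the sketch: there is no bright line in the math itself at which "two parameters" becomes dangerous and "two billion and one" does not; only the application differs.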

The conclusion would be something like: you can no longer study physics in Iran, because then you might be able to build a nuke. And that has been kind of what big tech has been pushing for, not because they want safety, but because, again, they want a monopoly. And so I think we have to be very, very careful not to do that. I do think there will probably be some cases that come up that are enabled by AI, new applications, that do potentially need to be regulated. For example, I don't know that there's a law saying that if you recreate something that sounds exactly like Drake, and then put out a song that sounds like a Drake song, that that's illegal.

Maybe that should be illegal. I think those things need to be considered, for sure, and there's certainly danger in that. I also think we need technological solutions, not just regulatory solutions, for things like deepfakes, solutions that help us get to what's human and what's not human. And, interestingly, a lot of those are viable now based on blockchain crypto technology.

Yeah. So, just on the voice thing real quick. I believe this to be the case: it is not currently possible to copyright a voice. Yeah.

Right. You can copyright lyrics, you can copyright music, and you can copyright tunes, right, melodies and so forth. But you can't copyright a voice.

Marc Andreessen

Yeah, that seems like a perfect example where it probably is a good idea to have a law that lets you copyright your voice. Yeah, I feel that way. You know, particularly if people call their voice Drake squared or something. Right.

Ben Horowitz

Like, you know, it could get very dodgy. Again, it's just the details. You know, trademark: you can trademark your name, so you could probably prosecute on that. But by the way, having said that, look, this also gets to the complexity of these things.

Marc Andreessen

There is actually an issue around copyrighting a voice, which is: okay, how close to the voice of Drake does it have to be? There are a lot of people, and a lot of voices, in the world. How close do you have to get before you're violating copyright? And what if my natural voice actually sounds like Drake?

Like, am I now in trouble? Right. And do I outlaw, like, Jamie Foxx imitating Quincy Jones and that kind of thing? Right, exactly. So anyway, look, I'm agreeing violently with you on this: that seems like a great topic that needs to be taken up and looked at seriously.

From a legal standpoint, that is obviously an issue that's elevated by AI, but it's part of the general concept of being able to copyright and trademark things, which has a long history in US law. Yeah, for sure. Let's talk about the decentralization and the blockchain aspects of this. I want to get into this.

Goose asks: how important is the development of decentralized AI, and how can the private sector catalyze prudent, pragmatic regulations to ensure the US retains innovation leadership in this space? So, Ben, let's talk about decentralized AI. Maybe I'll highlight it real quick and then you can build on it. The default way that AI systems are being built today is with basically supercomputer clusters in a cloud. So you'll have a single data center somewhere that's got 10,000 or 100,000 chips, and then a whole bunch of systems that interconnect them and make them all work.

And then you have a company that basically owns and controls that, and these AI companies are raising a lot of money to do it. These are very large scale, centralized kinds of operations. To train a state-of-the-art model, you're at $100 million plus for a big one; to train a small one, like the Databricks model that just came out, it's on the order of $10 million. So these are large centralized efforts.

And by the way, we all think that the big models are going to end up costing a billion dollars and up in the future. This raises the question: is there an alternate way to do this? The alternate way, we strongly believe, is a decentralized approach, and in particular a blockchain-based approach. It's actually the kind of thing the blockchain, web3 method seems like it would work with very well. And in fact, we are already backing companies and startups that are doing this.

And then I would say there are at least three obvious layers that you could decentralize that seem increasingly important. Well, actually, let me say four. There's the training layer, which is building the model. There's the inference layer, which is running the model to answer questions. There's the data layer.

Ben, to your point on opening up the black box of where the data is coming from: there should probably be a blockchain-based system where people who own data contribute it for the training of AIs, get paid for it, and where you track all of that. And then there's a fourth that you alluded to, which is deepfakes. It seems obvious to us what the answer to deepfakes is. And I should pause for a second and say: in my last three months of trips to DC, the number one issue politicians are focused on with AI is deepfakes. It's the one that directly affects them. I think every politician who's thought about this has a nightmare scenario where, three days before their reelection, a deepfake goes out with them saying something absolutely horrible, and it's so good that the voters get confused, and they lose the election on it.

And so I would say that's the thing that actually has the most potency right now. And then what a lot of people say, including the politicians, is: therefore, we need a way to detect deepfakes. So either the AI systems need to watermark AI-generated content so that you can tell something is a deepfake, or you need these kinds of scanners, like the ones being used in some schools now, to try to detect that something was AI-generated. Our view, as both technologists and investors in the space, is that methods of detecting AI-generated content after the fact are basically not going to work.

And they're not going to work because AI is already too good at this. By the way, if you happen to have kids in a school that's running one of these scanner programs that is supposed to detect whether your kid wrote an essay or used ChatGPT to write it: those really don't work in a reliable way, and they produce a lot of both false positives and false negatives, which are very bad. So those are actually very bad ideas.

And for the same reason, detection of AI-generated photos and videos and speech is not going to be possible. So our view is you have to flip the problem, you have to invert it. What you have to do instead is basically have a system in which real people can certify that content about them is real, where content has provenance as well, where you... go ahead, Ben.

Go ahead and describe how that would work. Yeah, so one of the amazing things about crypto blockchain is it deploys something known as a public key infrastructure, which enables every human to have a key that's unique to them, with which they can sign things. So if I was in a video or in a photo, or I wrote something, I can certify that, yes, this is exactly what I wrote, and you cannot alter it into something else. It is just exactly that. And then as that thing gets transferred through the world, let's say it's something like a song that you sell, you can track it, just like, in a less precise way, we track the provenance of a work of art, or of a house, who owned it before you and so forth. That's also an easy application on the blockchain.
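The sign-and-track idea can be sketched as a tamper-evident provenance chain. A caveat on this sketch: a real system would use public-key signatures (for example Ed25519) so that anyone can verify without the creator's secret; Python's standard library has no public-key crypto, so an HMAC over the creator's secret key stands in for the signature here, and all names are illustrative.

```python
import hashlib
import hmac
import json

# Sketch of blockchain-style content provenance. Each record commits to
# the content's hash, its owner, and the previous record, so rewriting
# history breaks either a hash link or a signature.

def sign(secret_key: bytes, payload: dict) -> str:
    """HMAC stand-in for a public-key signature over a canonical payload."""
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret_key, msg, hashlib.sha256).hexdigest()

def append_record(chain: list, secret_key: bytes, content: bytes, owner: str) -> None:
    """Add a tamper-evident record: each entry links to the hash of the one before it."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "owner": owner,
        "prev": prev_hash,
    }
    record = {**payload, "sig": sign(secret_key, payload)}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain: list, secret_key: bytes) -> bool:
    """Check both the hash links and the signatures along the chain."""
    prev = "genesis"
    for rec in chain:
        payload = {k: rec[k] for k in ("content_hash", "owner", "prev")}
        if rec["prev"] != prev or rec["sig"] != sign(secret_key, payload):
            return False
        prev = rec["hash"]
    return True

if __name__ == "__main__":
    key = b"creator-secret"
    chain: list = []
    append_record(chain, key, b"original song master", "Creator")
    append_record(chain, key, b"original song master", "Label")
    print(verify(chain, key))       # True: provenance intact
    chain[0]["owner"] = "Impostor"  # tamper with history...
    print(verify(chain, key))       # False: tampering detected
```

The design point is the inversion described above: rather than trying to detect fakes after the fact, authentic content carries a verifiable chain of custody from the moment it is created.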

Ben Horowitz

And so that combination of capabilities can make this whole kind of program much more viable in terms of knowing what's real, what's fake, where it came from, where it started, where it's going, and so forth. You know, going back to the data one, I think it's really, really important. These systems have done something I would say is dodgy, and there has been big pushback against it, with Elon trying to lock down Twitter and the New York Times suing OpenAI and so forth: they have gone out and just slurped in data from all over the Internet, and from people's businesses and so forth, and trained their models on it. And I think there's a question of whether the people who created that data should have any say in whether the model is trained on it.

And blockchain is an unbelievably great system for this, because you can permission people to use it, you can charge them a fee, and it can all be automated in a way where you can say, sure, come train. I think training data ought to be of this nature: there's a data marketplace, and people can say, yes, take this data for free, I want the model to have this knowledge; or no, you can't have it for free, but you can pay for it; or no, you can't have it at all. Rather than what's gone on, which is this very aggressive scraping, where you have these very smart models and these companies making enormous amounts of money from data that certainly didn't belong to them.

Maybe it's in the public domain or what have you, but that ought to be an explicit relationship, and it's not today. That's a very great blockchain solution, and part of the reason we need the correct regulation on blockchain, and why we need the SEC to stop harassing and terrorizing people trying to innovate in this category. So that's the second category. And then you have training and inference.
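The data-marketplace idea sketched above comes down to a per-dataset policy that a trainer must satisfy before use. A minimal sketch, with the caveat that on a real blockchain this check and the payment would be enforced by a smart contract, and every name here is an illustrative assumption:

```python
from dataclasses import dataclass

# Each dataset owner publishes a policy: free, paid (with a fee), or
# deny. A trainer's request is checked against the policy instead of
# the data being scraped without consent.

@dataclass
class DataPolicy:
    owner: str
    terms: str        # "free", "paid", or "deny"
    fee: float = 0.0  # charged per training use when terms == "paid"

def request_training_use(policy: DataPolicy, payment: float) -> bool:
    """Return True if the trainer may use this dataset under its policy."""
    if policy.terms == "free":
        return True
    if policy.terms == "paid":
        return payment >= policy.fee
    return False  # "deny": the owner opted out entirely

if __name__ == "__main__":
    open_data = DataPolicy(owner="wiki-author", terms="free")
    news_data = DataPolicy(owner="newspaper", terms="paid", fee=100.0)
    private_data = DataPolicy(owner="individual", terms="deny")
    print(request_training_use(open_data, 0))       # True
    print(request_training_use(news_data, 50.0))    # False: fee not met
    print(request_training_use(private_data, 999))  # False: owner opted out
```

The contrast with the status quo is that the "yes/for-a-fee/no" decision belongs to the data's creator, and the transaction is recorded rather than implicit.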

And I would say right now the push against decentralized training and inference is: well, you need this very fast interconnect, and you need it all to be in one place technologically. And I think that's true for people who have more money than time, which is startups and big companies and so forth. But people in academia, who have more time than money, are getting completely frozen out of AI research. There's not enough money in all of academia to participate in AI research anymore.

And so having a decentralized approach where you can share all the GPUs across your network means that, hey, maybe it takes a lot longer to train your network or to serve it, but you know what? You can still do your research. You can still innovate, create new ideas and new architectures, and test them out at large scale, which will be amazing if we can do it. And again, we need the SEC to stop kind of illegally terrorizing every crypto company and trying to block laws from being put in place that would help enable this. Yeah, and you alluded to it: the college thing actually really matters.

Marc Andreessen

So we have a friend who is very involved in running one of the big computer science programs at one of the major American research universities. And of course, by the way, a lot of the technology we're talking about was developed at American research universities, right? Yeah, and Canadian ones, too. Toronto. Canadian ones and European ones.

Exactly. You know, historically, as with every other wave of technology in the last hundred years, the research universities across these countries have been the gems, the wellsprings, of a lot of the new technology that has ended up powering the economy and everything else around us. This friend said a while ago that his concern was that his university would be unable to fund a competitive AI cluster, basically a compute grid that would actually let students and professors at that university work in AI, because it's now getting to be too expensive, and universities just aren't funded to have capex programs that big. And then he said his concern more recently has been that all research universities together might not be able to afford it, which means universities collectively might not be able to have cutting-edge AI work happening on the university side. And then I happened to have a conversation.

I was in DC, in a bipartisan House meeting the other day on these AI topics, and one of the, in this case, Democratic congresswomen asked me the question which always comes up, and which is always a very serious question: how do you get more members of underrepresented groups involved in tech? And I found myself giving the same answer I always give, which is that the most effective thing you can do is go upstream: you need more people coming out of college with computer science degrees who are skilled and qualified and trained and mentored to be able to participate in the industry. And, Ben, you and I both came out of state schools with computer science programs, which is what let us have the careers we've had.

And so I found myself answering the question saying, well, we need more computer science graduates from every group. But in the back of my head I was thinking: it's going to be impossible to do that, because none of these places are going to be able to afford the compute resources to actually have AI programs in the future. Maybe the government can fix this by just dumping a ton of money on top of these universities, and maybe that's what will happen, but in the current political environment that seems like maybe it's not quite feasible, for a variety of reasons. The other approach would be a decentralized, blockchain-based approach that everybody could participate in, if that were something the government were willing to support, which right now it's not.

And so I think there's a really, really central, important, vital issue here that is being glossed over by a lot of people and that should really be looked at. Yeah, no, I think it's absolutely critical. And this is, again, going back to our original thing: it's so important, to America being what America should be, to get these issues right. And we're definitely in danger of that not happening, because, look, I think people are taking much too narrow a view of some of these technologies and not understanding their full capabilities.

Ben Horowitz

And we get into: oh, the AI could say something racist, therefore we won't cure cancer. I mean, we're getting into that kind of dumb idea, and we need to have a tech-forward solution to some of these things, and then the right regulatory approach to make the whole environment work. So, look, let's go into the next phase of this now, which is the global implications. I'm going to conjoin two different topics here, but I'm going to do it on purpose. Michael Frank Martin asks: what could the US do to position itself as the global leader of open source software?

Marc Andreessen

Do you see any specific legislation or regulatory constraints that are hampering the development of open source projects? Arda asks a similar question: what would an ideal AI policy for open source software models look like? And then Sarah Holmes asks the China question: do you think we will end up with two AI tech stacks, the West and China?

And ultimately, will companies have to pick one side and stay on it? Look, I would say this is where you get to the really, really big geopolitical, long-term issue. My understanding of things is sort of as follows: for a variety of reasons, technological development in the West is being centralized into the United States, with some in Canada and some in Europe, although quite frankly, a lot of the best Canadian and European tech founders are coming to Silicon Valley. Yann LeCun is a hero in France, teaches at NYU, and works at Meta, both of which are American institutions. So there's sort of an American, or let's say American-plus-European, tech vanguard in the world.

And then there's China. And really it's quite a bipolar situation. I would say the dreams of tech being fully democratized and spreading throughout the world have been realized for sure on the user side, but not nearly as much on the entrepreneurship side or the invention side. And again, immigration is a great virtue for the countries that are the beneficiaries of it, but the other side of it is that other countries are going to be less competitive because their best and brightest are moving to the US. So anyway, we are in a bipolar tech world.

It's primarily the US and China. This is not the first time we have been in a bipolar world involving geopolitics and technology. The US and China have two very different systems. The Chinese system has all of the virtues and downsides of being centralized.

The US system has all the virtues and downsides of being more decentralized. There is a very different set of views in the two systems on how society should be ordered, what freedom means, and what people should be able to do and not do. And then, look, both the US and China have visions of global supremacy, and basically agendas and programs to carry forward their points of view on the technology of AI, and on the societal implications of AI, throughout the world. And so there is this cold war. Oh, and the other thing is, in DC it's just crystal clear that there's a dynamic happening where Republicans and Democrats are trying to leapfrog each other every day on being more anti-China. And so our friend Niall Ferguson is using the term Cold War 2.0. Whether we want to or not, we're in Cold War 2.0; we're in a dynamic similar to the one with the USSR 30, 40, 50 years ago.

To Sarah Holmes's question: it's 100% going to be the case. There are two AI tech stacks, there are two AI governance models, there are two AI deployment systems, and there are two ways in which AI dovetails into everything from surveillance to smart cities to transportation, self-driving cars, and drones: who controls what, who gets access to what, who sees what, and, by the way, the degree to which AI is used as a method for population control. There are very different visions, and these are national visions and global visions, and there's a very big competition developing. It certainly looks to me like there's going to be a winner and a loser. I think it's overwhelmingly in our best interest for the US to be the winner. And for the US to win, we have to lean into our strengths.

And the downside of our system is that we are not as well organized and orchestrated top-down as China is. The upside of our system, at least historically, is that we're able to benefit from decentralization: from competition, from a market economy, from a private sector, where a much larger number of smart people making lots of small decisions get to good outcomes, as opposed to a dictatorial system in which a small number of people try to make the decisions. Look, this is how we won the Cold War against Russia: our decentralized system just worked better economically, technologically, and ultimately militarily than the Soviet centralized system.

And so it just seems fairly obvious to me that we have to lean into our strengths. We'd better lean into our strengths, because if we think we're just going to be another version of a centralized system, but without the advantages China gets from having a more centralized system, that just seems like a bad formula. So let me pause there, Ben, and see what you think. Oh, I know for sure. I think that'd be disastrous.

Ben Horowitz

And I think this is why it's so clear that, to answer the question, if there's one regulatory policy we could enact that would ensure America's competitiveness, it would be open source. And the reason is that, as you said, it enables the largest number of participants to contribute to AI, to innovate, to come up with novel solutions and so forth. And I think you're right about China: what's going to happen in China is they're going to pick one, because they can, and they're going to drive all their wood behind that arrow in a way that we could never do, because we just don't work that way.

And they're going to impose that on their society and try to impose it on the world. And our best counter to that is to put it in the hands of all of our smart people. We have so many smart people from all over the world; as we like to say, diversity is our strength. We've got tremendously different points of view, different kinds of people in our country, and the more we can enable them, the more likely we are to be competitive.

And I'll give you a tremendous example of this. If you go back to 2017 and you read any foreign policy magazine, there wasn't a single one that didn't say China was ahead in AI: they have more patents, they have more students going to universities for it, they're ahead in AI, we're behind in AI. And then ChatGPT comes out and, oh, I guess we're not behind in AI.

We're ahead in AI. And the truth of it was that what China was ahead on was integrating AI into their government. And look, we're working on doing a better job of that with American Dynamism, but we're never going to be good at that model. That's the model they're going to be great at, and we have to be great at our model.

And if we start limiting that, outlawing startups and outlawing anybody but the big companies from developing AI and all that kind of thing, we'll definitely shoot ourselves in the foot. Another important point, I think, for the safety of the world: when you talk about two AIs, that's two AI stacks, perhaps. But it's very important that countries that aren't America and aren't China can align AI to their values. And I'll give you one really important example. Look, I've been spending a lot of time in the Middle East, and if you look at the history of a country like Saudi Arabia, they're coming from a world of fundamentalism and a set of values that they're trying to modernize, and they've done tremendous things with women's rights and so forth.

But look, there's still the fact that they've got people who don't want to go to that future so fast, and they need to preserve some of their history in order not to have a revolution or extreme violence. And we're seeing al Qaeda spark up again in Afghanistan and all these kinds of things; by the way, al Qaeda is a real enemy of modern Saudi Arabia just as much as an enemy of America. So if Saudi Arabia can't align an AI to current Saudi values, it could literally spark a revolution in their country. And so it's very important that the technology we develop not be totally proprietary and closed source, that it be modifiable by our allies, who need to progress at their own pace to keep their country safe, and keep us safe in doing so. This has great geopolitical ramifications.

What we do here matters: if we go to the model that Google and Microsoft are advocating for, this Chinese-style model where only a few can control AI, we're going to be in big trouble. Yeah, and I just want to close on the open source point, because it's so critical. This is where I get extremely irate at the idea of closing down open source, which a number of these people are lobbying for very actively, by the way. I'm going to name one more name.

Marc Andreessen

We even have VCs lobbying to outlaw open source, which I find just completely staggering. In particular, Vinod Khosla, which is just incredible to me.

He's the founder of Sun Microsystems, which was in many ways a company built on open source, built on open source Unix out of Berkeley, and which then itself built a lot of critical open source. And then of course it was the dot in dot-com, and the Internet was all built on open source. And Vinod has been lobbying to ban open source AI. By the way, he denies that he's been doing this, but I saw him do it with my own eyes when the US congressional China committee came to Stanford. I was in the meeting where he was with 20 or 30 congressmen, lobbying for exactly this.

And so I've seen him do it myself. And look, he's got a big stake in OpenAI. Maybe it's financial self-interest; maybe he's a true believer in the dangers. But in any event, I think he proved on Twitter that he was not a true believer in the dangers.

Ben Horowitz

I'll get into that. I'll explain that. Yeah, I mean, even within little tech, even within the startup world, we are not uniform on this. And I think that's extremely dangerous. Open source.

Marc Andreessen

What is open source software? It is quite literally the technological equivalent of free speech, which means it's the technological equivalent of free thought. And it is the way the software industry has developed to build many of the most critical components of the modern technological world; and then, Ben, as you said earlier, to be able to secure those components and have them actually be safe and reliable; and then to have the transparency we've talked about, so that you know how they work and how they're making decisions; and then, to your last point, so that you can customize AI for many different environments, so you don't end up with a world with just one or a couple of AIs, but instead a diversity of AIs with lots of different points of view and lots of different capabilities.

And so the open source fight is actually at the core of this. And of course the reason the people with an eye towards monopoly or cartel want to ban it is that open source is a tremendous threat to monopoly or cartel; in many ways it's a guarantee that a monopoly or cartel can't last. But it is absolutely 100% required for the furtherance of, number one, a vibrant private sector; number two, a vibrant startup sector; and then, right back to the academia point: without open source, university and college kids are not even going to be able to learn how the technology works. They're just going to be completely boxed out. And so a world where open source is banned is bad on so many fronts.

It's just incredible anybody's advocating for it, and it needs to be recognized as the threat that it is. Yeah. And on that note, it was such a funny dialogue between you and him. I'll give a quick summary of it. Basically, he was arguing for closed source, you for open source.

Ben Horowitz

His core argument was: this is the Manhattan Project, and therefore we can't let anybody know the secrets. And you countered that by saying, well, if this is in fact the Manhattan Project, then is the OpenAI team locked in a remote location? Do they screen all their employees very, very carefully? Is it air-gapped? Is there super high security? Of course, none of that is close to true.

In fact, I'm quite sure they have Chinese nationals working there; probably some are spies for the Chinese government. There isn't any kind of strong security at OpenAI or at Google or at any of these places that is anywhere near the Manhattan Project, where they built a whole city that nobody knew about so spies couldn't get into it. And once you caught him on that, he said nothing, and then he came back with: well, it costs billions of dollars to train these models. You just want to give that away?

Is that good? Is that good economics? That was his final counterpoint to you, which basically said: I'm trying to preserve a monopoly here, what are you doing? I'm an investor. And I think that's true for all these arguments.

Marc Andreessen

Well, the kicker, Ben, the kicker to that story is that three days later the Justice Department indicted a Chinese national, a Google employee, who stole Google's next-generation AI chip designs, which are quite literally the crown jewels of an AI program. If you stretch the metaphor, it's the equivalent of stealing the design for the bomb. And that Google employee took those chip designs to China.

And by definition, that means the Chinese government got it, because there's no distinction in China between the private sector and the government; it's an integrated thing, and the government owns and controls everything. So it's 100% guaranteed that that went straight to the Chinese government and the Chinese military. And Google, which has a big information security team and all the rest of it...

According to the indictment, Google did not realize that that engineer had been in China for six months. Yeah, amazing. Well, hold on. It gets better. It gets better.

Ben Horowitz

This is the same Google, with the same CEO, that refused to sell Google's proprietary AI technology to the US Department of Defense. So they're supplying China with AI and not supplying the US. Which just goes back to: look, if it's not open source, we're never going to compete. We've lost the future of the world right there. Which is why it's the single most important AI issue, for sure. Yeah, and you're not going to lock this stuff up.

Marc Andreessen

You're not going to lock it up. Nobody's locking it up. It's not locked up. These companies are security Swiss cheese. You can have a debate about the tactical relevance of chip embargoes and so forth.

But the horse has left the barn on this, not least because these companies are without a doubt riddled with foreign assets and are very easy to penetrate. And so we just have to be, I would say, very realistic about the actual state of play here. We have to play in reality, and we have to win in reality. And to win, we need innovation, we need competition, we need free thought, we need free speech — we need to embrace the virtues of our system and not shut ourselves down in the face of the conflicts that are coming. Another one:

Andreas asks: why are US VCs so much more engaged in politics and policy than their global counterparts? And I really appreciate that question, because if that's the question, then boy, the VCs outside the US must not be engaged at all, because US VCs are barely engaged. And then: what do you believe the impact of this is on both the VC ecosystem and society in general? A directly related question from Vincent: are European AI companies becoming less interesting investment targets for US-based VCs due to the strict and predictably unpredictable regulatory landscape in Europe? Would you advise early-stage European AI companies to consider relocating to the US as a result?

Ben Horowitz

Great question. Well, look, I think it goes back a little to what you said earlier, which is that in the startup world, in the West, there's the United States and then there's everywhere else. And the United States is bigger than everywhere else combined. So it's natural. And look, in these kinds of political things, it starts with the leader, and the US is the leader in VC.

We feel like we're the leaders in US VC, so we need to go to Washington — until we go, nobody's going. And that's a lot of the reason why we started. Now, on European regulatory policy:

Look, I think regulatory policy generally is likely to dictate where you can build these companies. We've seen some interesting things. France turns out to be leading a revolution in Europe on AI regulation, where they're basically telling the EU to pound sand — in large part because they have a company there, Mistral, which is a national jewel for the country, and they don't want to give it up because the EU has some crazy safetyism thing going on. Yeah. And I would also note that France, of course, is playing the same role with nuclear policy in Europe.

They're the one country that has been staunchly pro-nuclear — probably one of the cleanest countries in the world as a result. Right. And they've been trying to hold off, in a lot of ways, attempts throughout the rest of Europe, especially from Germany, to basically ban civilian nuclear power. Yeah. And the UK has sort of been flip-flopping on AI policy, and we'll see where they come out.

And Brussels has been as ridiculous on this as they've been on almost everything. Yeah. The big thing I'd note here is that there's a really big philosophical distinction. I think it's rooted in the difference between what has traditionally been called the Anglo, or Anglo-American, approach to law and the continental European approach. And I forget the terms for it, but —

There's common law and then — yeah, civil law, I think it's civil law. So the difference basically is: under one, that which is not outlawed is legal; under the other, anything that's not explicitly legal is outlawed. Right.

Marc Andreessen

In other words: by default, do you have freedom, and then the law imposes constraints — or by default do you have no ability to do anything, and then the law enables you to do things? This is a fundamental philosophical, legal, political distinction. And it shows up in a lot of these policy issues through this idea called the precautionary principle, which is sort of the embodiment of the traditional European approach. The precautionary principle basically says new technologies should not be allowed to be fielded until they are proven to be harmless. Right. And, of course, the precautionary principle is very specifically a hallmark of the European approach to regulation.

And increasingly, you know, of the US approach too. As for its origin, it was actually described in that way and given that name by the German Greens in the 1970s as a means to ban civilian nuclear power — with just catastrophic results, by the way. And we could spend a lot of time on that, but I think everybody at this point, including the Germans, increasingly agrees that was a big mistake. Among other things, it has led to Europe basically funding Russia's invasion of Ukraine through the need for imported energy, because they keep shutting down their nuclear plants. So just a catastrophic decision.

But the precautionary principle has become, I would say, extremely trendy. It's one of those things that sounds great, right? Why would you want anything to be released into the world if it's not proven to be harmless? How can you possibly be in support of anything that's going to cause harm?

But the obvious problem is that under that principle, you could never have deployed technologies such as fire, electric power, internal combustion engines, cars, airplanes, the computer. Right? Every single piece of technology that powers modern civilization has some way in which it can be used to hurt people — every single one. Technologies are double-edged swords. You can use fire to protect your village or to attack the neighboring village.

These things can be used both ways. So if we had applied the precautionary principle historically, we would still be living in mud huts, and we would be absolutely miserable. And the idea of imposing the precautionary principle today — if you're coming from an Anglo-American perspective, or a freedom-to-innovate perspective — is just incredibly horrifying. It's basically guaranteed to stall out progress. This is very much the mentality of the EU bureaucrats in particular, and it's the mentality behind a lot of their recent legislation on technology issues.

France does seem to be the main counterweight against this in Europe. You know, Ben, to your point, the UK has been a counterweight in some areas, but the UK also has, I would say, received a full dose of this programming. Yeah, they have that tendency. Yeah. And on AI in particular.

I think they've been on the wrong side of that, which hopefully they'll reconsider. So again, this is a really, really important issue. If the surface-level argument — okay, this technology might be used for some harmful purpose — is allowed to be the end of the discussion, then nothing new is ever going to happen in the world.

That will ultimately cause us all to stall out completely. And if we stall out, that will, over time, lead to regression. And literally, this is happening: the power is going out. German society — German industrial companies — are shutting down because they can't afford power, which is the result of the imposition of this policy in the energy sector.

This is a very, very important thing. I think the EU bureaucracy is a lost cause on this, and so it's going to be up to the individual countries to directly confront it if they want to. Anyway, I really applaud what France has done, and I hope more European countries join them in being on the right side of this. Yeah, yeah.

Ben Horowitz

It's always funny to me to hear the EU — and the economists and these kinds of voices — say, oh, the EU may not be the leader in innovation, but we're the leaders in regulation. And I'm like, well, you realize those go together; one is a function of the other. Okay, good. Let's do one more global question.

Marc Andreessen

Lap Gong Leong asks: are there any other countries that could be receptive to techno-optimism? For example, could Britain, Argentina, or Japan be ideal targets for our message and mission? Yes. So, look, we're at work on that in Britain, and we've gotten some pretty good reception from the UK government.

Ben Horowitz

There are a lot of very, very smart people there. We're working with them tightly on their AI and crypto efforts, and we're hopeful. As for Japan — having spent a lot of time there — they've obviously shown that capability over time. And then there's a lot about the way Japanese society works that holds them back at times as well.

Without getting into all the specifics, they have a very unusual and unique culture with great deference for the old way of doing things, which sometimes makes it hard to promote the new way of doing things. I also think, around the world, the Middle East is very much on board with techno-optimism. The UAE, Saudi, Israel, of course — many countries out there are very excited about these kinds of ideas, about taking the world forward and creating a better world through technology. And look, with our population growth, if we don't have a better world through technology, we're going to have a worse world without it. I think that's very obvious. So it's a very compelling message.

And, oh, by the way, I should also mention South America — there are a lot of countries really embracing techno-optimism now in South America, and some great new leadership there is pushing it. Yeah. I would also say, if you look at the polling on this, what you find is that what you could describe as the younger countries are more enthusiastic about technology. And I don't mean younger literally, in terms of when they were formed; I mean two things. One is how recently they've emerged into what we would consider to be modernity.

Marc Andreessen

And so, for example, how recently they came to embrace concepts like democracy, or free-market capitalism, or innovation generally, or global trade, and so forth. And the other is quite simply demographics: the countries with large numbers of people. And those are often, by the way, the same countries. Right. They have the reverse of the demographic pyramid we have, in that they actually have a lot of young people.

And young people both need economic opportunity and are very fired up about new ideas. Yeah. By the way, this is true in Africa as well. In many African countries — Nigeria, Rwanda, Ghana — techno-optimism, I think, is taking hold in a real way.

Ben Horowitz

You know, some of them need governance improvements, but they definitely have young populations. In Saudi, I think 70% of the population is under 30. So, to your point, they're very, very hopeful in those areas. Ben Gulcher asks: do you think the lobbying efforts by good-faith American crypto firms will be able to move the needle politically in the next few years? What areas make you optimistic as it relates to American crypto regulation?

Marc Andreessen

Crypto, blockchain, web3? Yeah. So I'm as hopeful as I've ever been. There's a bunch of things that have been really positive. First of all, the SEC has lost, I think, five cases in a row.

Ben Horowitz

So their arbitrary enforcement of things that aren't laws is not working. Secondly, there was a bill that passed through the House Financial Services Committee which is, I would say, a very good bill on crypto regulation, and hopefully it will eventually pass the House and the Senate. We've also seen Wyoming adopt really good new laws around DAOs. So there's some progress there.

And then we've been working really, really hard to educate members of Congress and the administration on the value of the technology. There are strong opponents, as I mentioned earlier, and that continues to be worrisome, but I think we're making great progress. And the Fairshake PAC has done a tremendous job of backing pro-crypto candidates, with great success: there were six different races on Super Tuesday where they backed candidates, and all six won.

So, you know, another good sign. Fantastic. Let me hit a couple of other topics quickly to get in under the wire. Father Time asks: can you give us your thoughts on the recent TikTok legislation? If passed, what does this mean for big tech going forward? Let me take a quick swing at that.

Marc Andreessen

So the TikTok legislation has been proposed by the US Congress and is currently being taken up in the Senate — and, by the way, President Biden has already said he'll sign it if the Senate and the House pass it. This is legislation that would require a divestment of TikTok from its Chinese parent company, ByteDance. TikTok would have to be a purely American company, or would have to be owned by a purely American company; failing that, TikTok would be banned in the US. This bill is a great example of the bipartisan dynamic in DC right now on the topic of China: it's being enthusiastically supported by the majority of politicians on both sides of the aisle.

I think it passed out of its committee 50 to zero, and it's basically impossible to get anybody in DC to agree on anything right now — except, basically, this. So this is super bipartisan. The head of that committee is a Republican, Mike Gallagher, and he worked in a bipartisan way with his committee members. And the Democratic White House immediately endorsed his bill.

So this bill has serious momentum. The Senate is taking it up right now; they are likely to modify it in some way, but it seems reasonably likely to pass based on what we can see — with, as I said, overwhelmingly bipartisan support. And look, the argument for the divestment or the ban runs a couple of different ways. Number one, it's an app on the phones of a large percentage of Americans, and the surveillance and potential propaganda aspects of that certainly have people in Washington concerned.

And then, quite frankly, there's an underlying industrial dynamic, which is that US Internet companies can't operate in China. So there's an unfair asymmetry underneath this that really undercuts, I think, a lot of the arguments for ByteDance. It has been striking to see opponents of this bill emerge — I would describe them as the further-right and further-left wings of their respective parties. Those folks make a variety of arguments, which I won't go through in detail, but let me characterize them at a surface level.

Further on the left, I think there are congresspeople who feel that TikTok is actually a really important and vital messaging system for reaching their constituents, who tend to be younger and very Internet-centric. So there's that, which is interesting. Then further on the right — and our friend David Sacks, for example, might be an example of this — there are a fair number of people who are very worried that the US government is so prone to abusing any regulatory capability with respect to tech, and especially with respect to censorship, that if you hand it any new regulatory or legal authority to come down on tech, that authority will inevitably be used not just against the Chinese company but also against the American companies. So there's some drama surfacing around this, and we'll see whether the opponents can derail it.

Look, quite frankly, without coming down on a side, I think this is one of those cases where there are actually excellent arguments on all three sides. There are very legitimate questions here, so I think it's great that the issue is being confronted, that the arguments have been surfaced, and that we're hopefully going to figure out the right thing to do. A couple of closing questions — let's see, hopefully on a semi-optimistic note. John Potter asks: how do you most effectively find common ground with groups and interests that you benefit from working with, but with which you are usually opposed, ideologically or otherwise?

Ben Horowitz

Yeah, there's this term in Washington, common ground. And you always want to start by finding the common ground, because I'll tell you something: in politics generally, most people have the same intentions. In Washington, people want life to be fair. They don't want people to go hungry. They want citizens to be safe but to have plenty of opportunity.

So there's a lot of common ground. The differences lie not in the intent but in how you get there — what is the right policy to achieve the goal? So I think it's always important to start with the goal and then work our way through why we think our policy position is correct. We don't really have a lot of disagreements on stated intent, at least.

I think there are some intentions that are very difficult in Washington.

The intention to control the financial system from the government, or to nationalize the banks, or to achieve the equivalent of nationalizing the banks — when you have that intent, that's tough. But most intentions are, I think, shared between us and policymakers on both sides. And now we'll close on this great question. Zach asks: would either of you ever consider running for office? And, for fun, what would be your platform?

So I won't, just because, look, being a politician requires a certain kind of skill set and attitude and energy — certain things that I don't possess, unfortunately. Do you have a platform you would run on if you did run?

Okay, yeah. Let's hear your platform. The American dream. I won't do it now, but I like to put up this chart that shows the change in prices in different sectors of the economy over time. And what you basically see is that the prices of things like television sets and software and video games are crashing hard, right?

Marc Andreessen

In a way that's great for consumers. I saw that 75-inch flat-screen, ultra-high-definition TVs are now down below $500. It's great. It's amazing. When technology is allowed to work its magic, prices crash in a way that's just great for consumers.

And when prices drop, it's the equivalent of a raise, so it makes human welfare a lot better. But the three elements of the economy that are central to the American dream are healthcare, education, and housing. Right? Think about what it means to have the American dream: it means being able to buy and own a home.

It means being able to send your kids to great schools, to get a great education and have a great life. And it means great healthcare — being able to take care of yourself and your family. The prices of those are skyrocketing, just straight to the moon. And of course, those are the sectors most controlled by the government: the most government subsidies for demand, the most government restrictions on supply, and the most interference with the ability to field technology and startups. The result is an entire generation of kids who, I think quite rationally, are looking forward and saying: I'm never going to be able to achieve the American dream.

I'm never going to be able to own a home. I'm never going to be able to get a good education or give my kids a good education. I'm not going to be able to get good healthcare. Basically, I'm not going to be able to live the life that my parents or grandparents lived, and I'm not going to be able to form a family and provide for my kids. In my opinion, that's the underlying theme of what has gone wrong socially, politically, and psychologically in the country.

That's what's led to this intense level of pessimism. That's what's led to this attraction to very zero-sum politics, to recrimination over optimism and building. And so I would confront that absolutely directly. And then, of course, I would point out that I don't think anybody in Washington is doing that right now.

Either I would win because I'm the only one saying it out loud, or I would lose because nobody cares. I've always wondered whether — both on the substance and on the message — that would be the right platform. Yeah, no, it would certainly be the thing to do. That's the thing.

Ben Horowitz

It's very complex in that healthcare policy is largely national, but education policy and housing policy also have a very large local component. So it'd be a complicated set of policies that you'd have to enforce. We still have a ton of questions, so we may do a part two on this at some point, but we really appreciate your time and attention, and we will see you soon. Okay? Thank you.

Marc Andreessen

Thank you.

Ben Horowitz

Thank you.