20VC: Scale's Alex Wang on Why Data Not Compute is the Bottleneck to Foundation Model Performance, Why AI is the Greatest Military Asset Ever, Is China Really Two Years Behind the US in AI and Why the CCP's Industrial Approach is Better than Anyone Else's

Primary Topic

This episode explores the pivotal role of data over computing power in AI development, the strategic military potential of AI, and the comparative AI advancements of the US and China.

Episode Summary

In an engaging discussion, Alex Wang, CEO of Scale AI, and host Harry Stebbings delve deep into the intricacies of AI development, emphasizing that data, not computing power, is the current bottleneck. Wang discusses the strategic military potential of AI, suggesting it could be a greater strategic asset than nuclear weapons, and warning of the risk if powers like China or Russia achieved AGI first. The conversation also covers the aggressive industrial policies of the Chinese Communist Party (CCP), which may enable China to surpass the US in AI capabilities. Wang advocates for a balanced approach to AI development, highlighting the need for enhanced data generation and management to drive future AI models beyond current capabilities.

Main Takeaways

  1. Data is now more critical than computing power in advancing AI technologies.
  2. AI has the potential to become the most significant military asset, surpassing even nuclear weapons in strategic importance.
  3. China's centralized approach to AI development could potentially allow it to overtake the US in the AI race.
  4. "Frontier data," which captures complex reasoning and problem-solving beyond what current models were trained on, is essential for the next AI breakthroughs.
  5. Wang stresses the importance of developing AI in a manner that maintains ethical standards and strategic control, especially considering the potential military applications.

Episode Chapters

1: Introduction to AI's Bottlenecks

Alex Wang discusses the limitations in AI development, pointing out that despite increases in computational power, the real challenge now lies in acquiring quality data to train more advanced models.

  • Alex Wang: "The real struggle in AI development is now data acquisition, not just computational power."

2: AI as a Military Asset

The conversation shifts to the potential military applications of AI, where Wang posits that AI could be a greater asset than nuclear weapons if developed and utilized strategically.

  • Alex Wang: "AI could potentially be the greatest military asset, potentially surpassing the strategic value of nuclear weapons."

3: China's AI Capabilities

Discussion on how China's government-driven approach to AI might allow it to leapfrog the US in AI development within the next few years.

  • Alex Wang: "China's centralized, aggressive action in AI development could enable it to surpass the US shortly."

4: The Need for Frontier Data

Wang introduces the concept of "frontier data," which involves complex data sets that are essential for training next-generation AI models capable of sophisticated reasoning and problem-solving.

  • Alex Wang: "We need to focus on generating and managing frontier data to push AI capabilities further."

Actionable Advice

  • Understand the importance of data in AI: Focus on generating high-quality, complex data sets to train AI models.
  • Stay informed about AI military applications: Keep abreast of developments to understand the broader implications of AI in defense.
  • Evaluate AI strategies: Whether in business or policy, consider how AI strategies can be developed to maintain competitive advantage and ethical standards.
  • Invest in AI education: Encourage learning and development in AI to foster a more informed workforce and public.
  • Monitor international AI developments: Pay attention to global AI advancements, particularly in countries like China, to gauge competitive and cooperative opportunities.

About This Episode

Alex Wang is the Founder and CEO @ Scale.ai, the company that allows you to make the best models with the best data. To date, Alex has raised $1.6BN for the company with a last reported valuation of $14BN earlier this year. Scale tripled their ARR in 2023 and is expected to hit $1.4BN in ARR by the end of 2024. Their investors include Accel, Index, Thrive, Founders Fund, Meta and Nvidia to name a few.

People

Alex Wang, Harry Stebbings

Companies

Scale AI

Books

None

Guest Name(s):

Alex Wang

Content Warnings:

None

Transcript

Alex Wang
At its core, this AI technology has the potential to be one of the greatest military assets that humanity has ever seen, potentially even more of a military asset than nukes. Let's say China or Russia had AGI today and the United States didn't. I would imagine they would use that to conquer. The CCP's system is incredibly good at taking very aggressive centralized action and centralized industrial policy to drive forward critical industries. They have a clear shot at racing forward.

Harry Stebbings
This is 20VC with me, Harry Stebbings, and my word, what a show we have in store for you today. This was such a special one to do live in the studio as we welcome Alex Wang, founder and CEO at Scale AI, the company that trebled revenue in 2023 and is expected to finish 2024 with $1.4 billion in ARR. Earlier this year, they raised $1 billion at a reported $14 billion valuation.

Unknown
And this show was immense, really one of my favorite shows to record in recent times. And so let me know what you think, and you can watch the full show from the studio on YouTube by searching for 20VC. That's 20VC. But before we dive in, let's face it, your employees probably hate your procurement process.

It's hard to follow, it's cobbled together across systems, and it's a waste of valuable time and resources. And as a result, you probably are facing difficulties getting full visibility, managing compliance and controlling spend. It's time for a better way. Meet Zip, the first modern intake-to-pay solution that can handle procurement and all of its complexities, from intake and sourcing to contracting, purchase orders and payments. By providing a single front door for employee purchases, Zip seamlessly orchestrates the procurement process across systems and teams, meaning you can procure faster with the least amount of risk and get the best spend ROI for your business.

With over $4.4 billion in savings for our customers, Zip is the go-to procurement solution for enterprise and industry disruptors like Snowflake, Discover, Lyft and Reddit. Finally, a solution employees love to use, where buying things for work just works. Get started today at ziphq.com/20VC. And speaking of game changers like Zip, we have to talk about Cooley, the global law firm built around startups and venture capital. Since forming the first venture fund in Silicon Valley, Cooley has formed more venture capital funds than any other law firm in the world.

With 60-plus years working with VCs, they help VCs form and manage funds, make investments, and handle the myriad issues that arise through a fund's lifetime. We use them at 20VC and have loved working with their teams in the US, London and Asia over the last few years. So to learn more about the number one most active law firm representing VC-backed companies going public, head over to Cooley.com and also CooleyGo.com, Cooley's award-winning free legal resource for entrepreneurs. And finally, travel and expense are never associated with cost savings. But now you can reduce costs up to 30% and actually reward your employees.

How? Well, Navan rewards your employees with personal travel credit every time they save their company money when booking business travel under company policy. Does that sound too good to be true? Well, Navan is so confident you'll move to their game-changing all-in-one travel, corporate card and expense super app that they'll give you $250 in personal travel credit just for taking a quick demo. Check them out now at navan.com/20VC. You have now arrived at your destination.

Harry Stebbings
Alex, I am thrilled that we could do this in person. Thank you so much for joining me today. Yeah, great to be here. Now, listen, it's funny, I told you, I tweeted before, like, we should skip the founding stories because there are many, many great times you've told it before, but I want to dive straight in and I want to ask you the question of when we look at model performance today. Let's just start high level.

Do you think we're seeing a case of diminishing returns, where more compute doesn't lead to better performance? Yeah, I think it's pretty fascinating. I mean, I think there's been this, especially coming up now, where OpenAI has had GPT-4 since fall of 2022. And since that timeframe, we haven't yet seen a new base model or a new model that's jaw-droppingly better than GPT-4. You know, we haven't seen the GPT-4.5 or the GPT-5, and the other labs haven't yet come out with models that are leagues and leagues better than GPT-4, despite way more compute expenditure.

Alex Wang
Since ChatGPT came out, you know, you can look at the graph of Nvidia's revenue and it just inflects. It just goes straight up after GPT-4 came out. Nvidia's data center revenue, I think, was roughly about $5 billion a quarter, and then it shoots up to where now it's north of $20 billion a quarter.

So there's been tens of billions, going to more than $100 billion, of spend on high-end Nvidia GPUs, all in the same timeframe. We haven't yet seen the big breakthrough since GPT-4, which, actually, came out before this huge inflection in Nvidia expenditure. So overall, it's this interesting thing where we're seeing investment into compute go up dramatically, go up exponentially right now. But we're still, I think, as a community, as an industry, kind of waiting for the next great model.

Harry Stebbings
So do you think we've reached this kind of asymptote of performance, where actually we'll see this kind of plateauing in performance while we wait for that? And do we think that's a momentary thing, or do we think it's kind of like self-driving? Remember, with self-driving, we saw a plateau in performance for several years, and it was only recently that we saw it inflect again. It's kind of this interesting thing.

Alex Wang
So there's three ingredients that go into these AI models, or three pillars. So there's compute, of course there's data, and there's the algorithms. The history of AI is that progress comes from sort of all three of these pillars sort of being built altogether. You certainly need a lot of computational capability, but you need the algorithmic advances, like the transformer originally, or RLHF, or, you know, whatever future algorithmic advances come, and then you need the data pillar to support it as well. And I think a lot of the plateau that we've recently seen can almost be explained at a very high level from hitting kind of a data wall.

GPT-4 was a model basically trained on nearly all of the Internet and using a huge amount of computational capability. And a lot of what the industry has been doing over the past few years, I think, is scaling the computation dramatically, but not necessarily building up the other two pillars in tandem. So there needs to be, I think, a combination of more algorithmic improvement, but in particular, we need to ensure that there's more data to support it. When you say data wall, what is the data wall, and what can we do to overcome it? Yeah, so at a super high level, we've used up all the easy data.

We've used up all of the Internet data, the Common Crawl and newer versions of the Common Crawl. Just so we understand, easy data is stuff on social media, anything not behind paywalls, anything that's easy and free to crawl. Anything that's easy and free to crawl, or stuff that can be torrented. There's a lot of reports that there's a lot of torrented data in some of these models, or basically anything that is already written down and easy to get from the open Internet. And then the first stage of a lot of this AI improvement has been these advances in pre-training, which is basically training these models to be really, really good at emulating the Internet.

And right now, we're at a point where these models are exceptionally good at emulating the Internet. They're better than any human at emulating the Internet. But the problem is, when we think of AGI, when we think of powerful AI systems, we want much more than just emulating the Internet. You want AI systems that can do tasks, you want AI systems that can solve difficult problems. You want AI systems that humans can collaborate with to solve all their daily problems.

This process of building agents and AI models that are capable of all these things, we're not going to get there from Internet data. And we've already used up all the Internet data. Why are we not going to get there from Internet data? When we think about effective agents, and when we think about software doing the work, not just selling the tools, as I think Sarah Tavel put quite well before, why is existing data not equipped to do that transition from tools to work? The simple answer is, a lot of the thought process and a lot of the thinking that humans go through when they are doing more complex tasks doesn't get written down on the Internet.

So, for example, if I'm a fraud analyst inside a large bank, my job is understanding, based on a set of transactions that seem suspicious, whether or not it's a fraudulent transaction. And I need to analyze all sorts of different pieces of data and use my deduction and all my human intelligence to make that decision. That process that I go through, it's not like I'm writing it down step by step, like, oh, I looked at this piece of data, and I looked at this piece of data, and then based on that, I deduced this. I'm not writing all that down on the Internet to later be crawled by these models. One way to think about it is: all of the reasoning and thinking that is powering the economy today, none of that gets written down on the Internet. And so if you just train on the Internet, the model has no ability to learn from all of that.

Harry Stebbings
So how do we codify and capture the data that's not codified already? As you said there with the fraud analyst, the thought process, the analysis, the discussion that goes on in internal meetings, that's not codified in datasets. How do we capture that to enable us to do the work? What I really believe is that what we need from now forward is frontier data. We need to basically have data abundance of frontier data, where right now we're in a sort of data scarcity mindset, or we're hitting a data wall. And this frontier data is exactly what we're talking about. Frontier data, in my mind, is complex reasoning chains, complex discussions, agent chains of models going and looking up a piece of data, doing some reasoning, looking up another piece of data, maybe correcting if it has an error, tool use, all of the key components that we would think of an agent being able to do.

Alex Wang
That all needs to be encapsulated into the frontier data to power the forward capabilities of these models. How do we capture that data? There's basically three pillars. So first is there's a lot of this data that's locked up in the world's enterprises today, and none of that gets on the Internet for very good reasons. But just to give a sense of scale, JPMorgan's proprietary internal dataset is 150 petabytes.

GPT-4 was trained on an Internet dataset that was less than one petabyte. So the amount of data that exists inside large enterprises is just absolutely astronomical. So there's one process of just mining all this existing enterprise data for all the goodness that exists within it. But you would never get that open sourced, would you? So this is all proprietary and then delivered custom to that customer.

Exactly. This has to be a process where every enterprise says: I have a set of very important problems for my enterprise, so I need to go through the process of basically mining all my existing data and refining all that existing data for use by AI systems to solve my own problems. When we think about breakthroughs, we talked about diminishing returns at the beginning. I spoke to one of the most powerful CTOs in the world the other day, and they said the real breakthrough in this question of are we reaching diminishing returns is whether we can really solve reasoning.

Harry Stebbings
How do you think about our ability to solve reasoning and that impact of data that you mentioned there in helping us navigate that? Yeah, I actually think that if you look at what these models can do, they're very good at reasoning in situations where they've seen a lot of data before. You know, I think we like to think about these AI as if they're like little human intelligences, but they're very different. Human intelligence and machine intelligence are very different. Humans have a very general form of intelligence.

Alex Wang
If a kid is raised in a very small neighborhood, they can live their whole lives in that small neighborhood, and they can go to an entirely different part of the world, and they can navigate, understand what's going on. No AI system today would be able to do that level of sort of drag and drop in one situation to another situation and figure out what's going on. I think we have to be cognizant. That's a limitation. But what that means is that for any situation that we want these models to perform well in, we need to have data of that situation or that scenario, and actually the model will perform really well.

So there's kind of two ways to think about resolving the reasoning gap that exists in these current models. One is obviously, you build some sort of general reasoning capability, which would definitely be a big breakthrough. The other one is just, it's a data problem. It's like you need data for every scenario where you want these models to reason well in. You just need to overwhelm them with data in all those scenarios, and you're going to get models that can reason really well.

Harry Stebbings
How do we move from an environment of data scarcity to data abundance when we appreciate the immense amounts of data that, say, JP Morgan or Goldman Sachs or any large enterprise has, but also the proprietary nature of that, which won't actually go to generalized models, which will help the world or humanity or any of these breakthroughs actually occur to everyone else? How do we move from that data scarcity to data abundance? Is it synthetic data that we're creating? How do we think about that? The second part is, to your point, new data that has to be produced.

Alex Wang
We need the means of production of new frontier data to get us from GPT-4 to GPT-10. I think when we think about chips, this is very natural, which is, oh yeah, we need to build more and more fabs, we need to build bigger fabs, we need to increase the resolution and get lower and lower nanometer fabs for compute. It's very natural for us to think about increasing the means of production.

But I think we don't think about this with data, and I think we need to do something very similar. And this process of producing data, it's sort of a hybrid human synthetic process. And that's really how we think about it, which is you need algorithms that can do a lot of the heavy lifting in producing synthetic data, but you need human experts who are going to be able to guide the AI systems and basically help provide input as to, you know, when the AI system gets stuck, or when they have a factuality issue, or when it's in a situation where it hasn't encountered before. A lot of autonomous vehicle scale up has been through these safety drivers. You have safety drivers inside the car, and when the car starts screwing up, you have the safety driver disengage and sort of take over.

And you need that kind of setup for these AI systems. You need AI models to be generating large amounts of data, and then humans who can kind of take over and nudge the models when necessary to make sure that you get really high quality data. What does that look like in the structure of organizations today? Do we create new roles for these AI safety drivers? Yeah.

Yeah. Trainers is one term. AI trainers or contributors is another term. For what it's worth, I think this process of contributing data to AI is actually one of the highest leverage jobs that humans can have. And the reason for that is, let's say I'm a mathematician, I can either go into my hole and do pure math and try to do pure math research.

That's one trajectory for my life. The other trajectory is I use all my skills and talents and intelligence to help make these AI models smarter. Let's say I make GPT-4 just a little bit smarter on math. If I take that little bit of improvement of the model, and I sum that up across all the times that GPT-4 is going to be called and used, across every math student who's going to use GPT-4, every company that's going to use GPT-4, every developer that's going to use GPT-4, that's a huge amount of impact. And so as a human expert, you have the ability to have society-wide impact by producing data to help improve these models.

What we see is, for scientists, mathematicians, doctors, human experts in the world, it's an incredibly exciting proposition to be able to, I can transmit my capabilities, intelligence, training, all of that into a model that's going to be able to have society wide impact. I mean, it's an incredibly exciting proposition. How do we think about the structure of data? Often people talk about the biggest challenge in kind of data governance is actually just like the structure and cleanliness of it. When we look at the 150 petabytes of JPMorgan data, I have no idea, but I presume it's not structured perfectly for a lot of models to ingest efficiently.

Harry Stebbings
How do we think about the structuring of this huge dataset that I'm sure all large enterprises have, and the challenge that that poses? So, again, I think this is a case where there are two parallel efforts. One is mining existing data, which in some ways is going to be a one-time hit. There's going to be a one-time benefit that you get from mining your existing data, and it could be really meaningful. Do you think in five years' time, everyone will have mined their largest data sources internally?

Alex Wang
I don't think everyone will, but certainly the most sophisticated companies will. And then we'll be at a point where we still need to make the models better. At the end of the day, it'll all boil down to data production. What are the means of forward production? In the same way that you need the means of forward production for chips and all the other things that you care about.

Harry Stebbings
Okay, so we have that in terms of mining existing data. You said there was another form. So there's data mining, and then there's forward data production. These are the two core directions for where we need this data to come from. And I think, kind of taking a broader step back, I think that a lot of AI progress at this point is fundamentally more data bottlenecked.

Alex Wang
If we were able to produce compute and data in lockstep with one another, so as Nvidia continues to manufacture hundreds of billions of dollars' worth more of chips, if we were able to produce a proportional amount of data as we got more and more chips, and we were able to produce these two together, then we would get astronomically more capable models. But just so I understand: when we think about increasing the supply side of data, what are the literal ways that we can do that? What comes to my mind is actually Dan Siroker at Limitless, who basically has this new hardware device which records every single thing that you say and do, and it produces your own personal AI, because it has everything that you've ever said in the day. That is a new form of data creation, in my mind. How do we increase the supply side of data?

So there's probably two main pieces. One is this effort from Limitless, or other efforts like it, which is basically much more longitudinal data collection, collecting more of what's naturally happening in the world. There's a bunch of forms of this. One is, in a workplace, I think you're going to want, as creepy as it sounds, some kind of constant data collection of what apps you're using, what order you're using apps in, where you copy-paste one thing to another.

Harry Stebbings
You have a lot of this with RPA and a lot of UiPath flows. Yeah, exactly. Used to that. Yeah, yeah, yeah. So process mining, which is one of the terms in SaaS, but basically the continued collection of existing enterprise processes. Then there's the consumer version of that, which looks kind of like what you're referencing, or maybe it's with the Meta Ray-Ban collaboration or whatever device ultimately does it, but sort of something that collects the longitudinal view of your own life.

Alex Wang
And then there has to be a real investment towards human experts collaborating with models to produce frontier data. So both of the things I referred to before, both enterprise process mining and, for lack of a better term, consumer data collection, those are all going to produce valuable datasets, but they're not going to produce the data that's actually going to push the models forward. Because to push the models forward, you need really highly complex data that's going to be able to push the frontiers of what the models can do. So this is where you need the agentic behavior, this is where you need the complex reasoning chains. This is where you need advanced code data, or maybe advanced physics or biology or chemistry data.

These are the things that are really needed to push the boundaries of the models. I think this is a global, kind of infrastructure-level effort that needs to happen. I think we need to think about it as: how do we get the world's experts to collaborate with the models to help produce AI systems that are going to be the world's best scientists or the world's best coders or mathematicians? When we think about the commoditization of the models, as everyone says we have, how do we think about proprietary access to these data sources? People have said to me before, and I don't mean to throw shade, that OpenAI's models are not necessarily better, they've just had better access to data, they've bought more data, whatever, whatever.

Harry Stebbings
But data being the central superiority element of why they had better performance in the past. Will we see one model get access that others don't? How do we think about fair and equitable access to data from the model side? Yeah, well, I actually think, to your point, if you think about the competitive playing field of these different model providers against one another, there's three pillars, right? There's algorithms, compute and data.

Alex Wang
And data, I think, is actually the primary pillar from which you can imagine a real durable competitive advantage emerging. So if you think about where there are moats in this LLM race, or where there are moats in this foundation model game, I think data is one of the few areas where you can produce a sustainable moat. Because the issue is: algorithms are IP that at some point the rest of the industry will learn about. You can have more compute than other people, but other people can just spend more money and buy that compute. And data is one of the few areas where you can actually produce a long-term sustainable competitive advantage.

I agree. When you look at some of OpenAI's agreements, they obviously partner with the FT to get access to all of the FT's historical library, and they've done quite a few, actually, with Axel Springer. I think that is access that a lot of other models do not have, which will make their content superior in whatever queries they have in that respect. Yeah, exactly. And I think this is the start of this form of thinking of sort of data as a moat.

You know, there's the FT, there's Axel Springer. These are the first indications of this. But in the future, these labs are going to be thinking a lot about, okay, what's the data that I'm going to use to differentiate relative to my competitors, and how am I going to produce that data, and what is the long-term durable advantage created by that? I actually expect that, with everything we've been talking about around data and model commoditization, we're going to see companies start building data strategies that drive more differentiation in the market over time.

I mean, another way to think about this is: right now in San Francisco, researchers and the big CEOs brag about how many GPUs they have. The biggest indicator of how serious they are about AI is how many GPUs they have. But I think in the future, they're going to brag about what data they have access to, how much data they're producing, and what their unique rights to different data sources are. I think that's actually going to be the primary plane of competition in the future, versus just, okay, Jensen's giving me however many hundreds of

Harry Stebbings
thousands of GPUs. Given data strategy being a potential element that one could win on and compete on in different ways, do you think we will not see the commoditization of these models over time? There's two futures. One is that even data strategy becomes something that very quickly commoditizes, and different labs sort of copy one another, or they all end up converging to the same direction. 100%, because especially with a lot of the content producers, they're not going to do exclusive agreements with one model and not other models.

Alex Wang
Yeah, different labs need to have strategies to produce their unique datasets. Let's say Anthropic, for example, has focused a lot on enterprise use cases, and maybe they need to develop a data strategy that enables them to have very differentiated access to new data to support those enterprise use cases. Or maybe OpenAI, with ChatGPT, needs to develop a unique data strategy that lets them leverage the fact that they have all these users and all this reach. The various labs are going to need to lean into where they're going to be able to get proprietary and differentiated data going into the future. Do you think we're going to see a reversion back to on-prem? I'm jumping around so much, but I'm loving this conversation.

Harry Stebbings
Sorry for that. But when we think about 150 petabytes of JPMorgan data, I don't know if they're going to be like, yeah, I'll throw it all in the cloud, my most sensitive data. Will we see the reversion back to on-prem, and models that work on-prem for these large enterprises? This is a super interesting question. I think when we talk to...

Thank you, Alex. I think when we talk to these large enterprises and the leaders within these enterprises, they are very quickly realizing this fact that you stumbled on, which is that their enterprise data might be their only competitive differentiator in an AI world. They're extremely, extremely cautious: if they do a deal where somehow a model developer gets access to all their data, or they share it in some way, then they could be mortgaging away their entire future. I think they're very, very cautious about that.

Alex Wang
This is actually why I think there's a very big opportunity for, whether it's open source models or the Llama models or the Mistral models or whatnot, basically these models that can go on premises, that enterprises can take and then customize on top of their own data, and then it never has to go back to a model developer or a cloud or anything like that. I think that there's a huge unmet need there, and I think that's actually where most serious enterprises are going to go towards, which is: I need really, really strong guarantees that my data is not going to be used in any way to improve my competitors. I think AI services will actually create more revenue over the next five years than AI models. We saw Accenture come out with, I think it was $2.4 billion in revenue from generative AI, and OpenAI was obviously at $2 billion. How do you think about...

Harry Stebbings
I'm just intrigued: with Scale AI today working with some of the largest enterprises, is there a services component? The learning and adoption curve is challenging for large enterprises. Do you see that as a core part of your business in the next few years as we scale the education curve? First of all, I think you're right. There's so much value to be generated from AI, for sure. But then there's this natural question of where the value capture is going to be.

Alex Wang
Right? It's this fascinating thing. If you go back and read High Output Management by Andy Grove, there are these chapters around how, for Intel, first we thought that this is where the value capture was going to be, but then we realized it was going to be in this other part of the stack, and so we had to migrate to that part of the stack, and then we had to migrate again.

And it's this incredible case study. I remember reading that and I read it maybe a decade ago in a different era of tech, and I was like, this is weird. This doesn't feel very relevant. And then now in AI, you're seeing it once again where it's so new and so nascent, where exactly where in the stack value will accrue. Feels like it's constantly moving.

And I agree with you, I think that the models themselves, there's so much competition there. I don't know how much value accrues at literally the model itself. But everything above the model and everything below the model, I feel very confident there will be value accruing. So for the infrastructure, I mean, Nvidia is the biggest company built on AI today. Like, they're the third most valuable company in the world.

Nvidia is more valuable, you know, their market cap is higher than Meta's and Google's and Amazon's and Saudi Aramco's. I mean, it's really staggering. Nvidia is an incredible, incredible company. And that's below the model.

And then above the model you have all these apps and these services that are going to be built on top of it. So I was arguing with someone this morning, actually, on the way here, though, and I was saying, yes, okay, we have Notion AI and we have, you know, Box, the storage company, who are going to implement AI solutions into their existing storage products so you can extract information better. Yeah. Have you seen the numbers?

Harry Stebbings
Salesforce are now growing at single digits. Like, Mongo are now growing at single digits. Point being, actually, the commoditization of these features means it'll be better products for us, but I don't know if you'll get value extraction from that in the form of increased pricing. How do you feel about that? Yeah, so our thesis on this: there was this article that flew around, The End of Software.

Alex Wang
Right? I saw this, Chris Paik. From Chris Paik, yeah. It was an intentionally provocative point of view, I think. So for those that haven't read it...

Harry Stebbings
What was the core premise, just so they understand it? I thought it was like a brilliant comparison. But he basically drew this comparison of software companies today to media companies pre-social media. The rough comparison was: in the older days of media, you had all these incredible media companies. There were these high-end shops where there were all these experts producing this very differentiated content.

Alex Wang
But then it got disrupted by social media and the Internet broadly, because all of a sudden the content distribution costs came down dramatically. The world of media consumption turned into this very broad constellation where you would consume whatever media was produced by anybody that was interesting to you. And it was sort of much more on demand, versus being this sort of walled garden of large media producers. And basically he drew this comparison to what's about to happen to software, which is: right now, enterprises live with this walled garden of some small number of software providers. And what's going to happen now, with gen AI and all these other trends, is they're going to have this constellation of all these different apps and point solutions, and this sort of portal to that constellation of various software providers.

And we're going to move from this current world of a smaller number of walled-garden SaaS apps to this sort of much more decentralized universe. Do you agree with that? It's intentionally provocative, right? But I think one thing that is true is I do think that enterprises and the world writ large are going to demand greater levels of customization. They're going to demand greater levels of personalization, and stuff that is really purpose-built to fit their business like a glove.

The first tech company that ever did something in this direction was Palantir. You know, they got a bad rap for a long time because everyone thought that Palantir, oh, they're just a consulting company. But Palantir's point of view, which is provocative as well, was: no, what we're going to do is go into enterprises, understand exactly what their problems are, and help them build the perfect application for them, one that's built on top of and connects all their data and all that stuff. If we can do that, then we're going to build something that's far more valuable for them than what any other software provider is going to be able to produce. They did this, obviously, before generative AI, before all these tools that are going to make this motion a lot more feasible.

But I do think there's an element of this being the way the world is moving, which is, especially now that software production costs and software creation costs are going down so dramatically, we're going to end up moving towards a world where more and more of the software that enterprises consume is going to be customized and custom-built and purpose-built for exactly their problems. What does that mean in terms of the makeup of engineering teams at large enterprises? Do they shrink? Do they focus on different things? Do we just have teams of the world's best prompters?

Harry Stebbings
What does that mean in terms of the changing structure of engineering teams? Yeah. Well, I think software engineering in general is going to change dramatically. A lot of what developers spend a lot of time on today, they will not need to spend time on going into the future as the models get better and better at coding. But there's certainly big parts of what they do which are irreplaceable.

Alex Wang
And over time, I think that the part in particular that's very, very valuable is this general process of going from what are my customer problems, or what are the problems I need to solve, and translating those into engineering problems and, almost, scoped tickets that can be solved by an AI engineer. Everyone says that we're going to see the end of per-seat pricing. Like you said, Chris had that provocative article, but everyone talks about the end of per-seat pricing. To what extent do you think we will see the end of per-seat pricing in this next wave of software? And especially with the data lens, where you could see a more consumption-based pricing model align, do you think that truly takes over?

The reason that per-seat pricing doesn't make sense going into the future is that at an enterprise today, certainly most of the productive work is done by their employees, done by people. But in a future where you imagine more and more of the work is done by AI agents or AI models, then per-seat pricing doesn't really make sense. As a provider of software, a provider of solutions, you want to make sure that you're capturing the value that you're providing to the people, but also the value that your agents or your AI systems are producing. That shifts a lot of the world towards consumption-based pricing versus per seat. One of my biggest worries is, obviously, we're in London.

Harry Stebbings
We specialize in many things here, long lunch breaks and regulation, so that's me being a little disparaging about London. But my question to you is, I really worry that we're going to see regulatory provisions which stifle innovation, because of consumer data protection acts and just unnecessary regulation around data access. Do you think I am justified, and how do you navigate the regulatory access-to-data question? It's a really important point. And I think that certainly what we've seen in the EU is a very restrictive approach to data.

Alex Wang
My personal belief, I don't think that more permissive regulations around data are at odds with being a liberal democracy. More sort of liberal data access provisions are in fact very compatible with being a liberal democracy. And I think that we as a society need to figure out what the right balance there is and how we sort of square the circle. But I think this is a very important question because it's almost like, I think in the United States, there's been a huge amount of effort and real regulatory effort in terms of how do we ensure we do not slow down chip production, how do we make sure that we can keep manufacturing huge amounts of chips and the US won't be disadvantaged. From that perspective, we need to take a similar lens to data.

So how do we, from a policy standpoint, both in the US and in the UK, frankly, think about ensuring that, as countries, we're not tying one hand behind our backs for future data production for these models? Do you think the US is currently tying one hand behind its back in terms of that? I'll put it this way: we're definitely not taking a pro-data regulatory stance. What would a pro-data regulatory stance look like?

I think there's a few things. First, there are large datasets that do not lend proprietary advantages to specific players, and those need to be centralized and made accessible to whole industries. A simple example: safety data in, let's say, aerospace, which is a hot topic, obviously. Safety data in aerospace should be collectively pooled for the purpose of advancing the entire industry forward. Or the example I mentioned before: fraud and compliance data in financial services should be pooled together and should build forward capabilities. So I think there are entire industrial sectors where there should be some degree of data pooling to just push forward the overall industry.

And I think what you need is, in a lot of consumer-facing areas, we need to work through a lot of the existing restrictions to make sure that those don't prevent AI progress. So one great example here is actually HIPAA in healthcare, and all the PII and other limitations. Right now, HIPAA and all the PII regulations will more or less prevent patient data from being used to train AI models. But I think we can agree, as a civilization, as a human race, we really want to learn from all of the existing medical data how we cure human diseases going forward. And so we need to figure out how we are going to make it so that there are very clear anonymization provisions, or a very clear and obvious way in which you can use existing patient data to improve future health outcomes.

I heard, actually, that China, apparently, I can't remember who said this on the show, but they said they're like two years behind the US in terms of AI progress. I heard that and I thought that is absolute shit. And I think when you look at data provisions and what the Chinese government will be willing to do in terms of data access and data provisions and regulation, I think if they are two years behind, they will very quickly catch up. How do you see China being two years behind?

Harry Stebbings
And do you agree with that? Two years ago, they were probably more than two years behind when OpenAI first produced GP four in the lab. China were nowhere near that. But just even the past few months, there was a chinese company, 0101 AI, that produced a model, Yi large, Yi dash large, that is one of the best models in the world. I think it's just behind.

Alex Wang
So it's behind GPT-4o and Gemini and Claude 3 Opus, and it's the next model right behind those in the leaderboards. So it's one of the best models in the world. So we've already seen them meaningfully catch up. Chinese LLM and AI capabilities are, I would say, right now pretty close to neck and neck with US capabilities. And I think if you plot the path ahead, based on everything we've talked about with data, they have a clear shot at racing forward and racing ahead of us.

It comes down to, at its core, the CCP's system is incredibly good at taking very aggressive centralized action and centralized industrial policy to drive forward critical industries. And we've seen, even in the past few years, or the past few decades, frankly, on solar, how the CCP has been able to take industrial policy to the point of being, by and large, the world's leader in solar, and then most recently EVs, and how the CCP system and approach has been able to create very, very cheap EVs. You're seeing this pattern play out over and over again, where the CCP approach to industrial policy is not the most innovative, but once an industry has been established and it's about, you know, turning the crank, they are better at turning the crank than any other economy in the world. I totally agree with you. I saw, actually, a chart, I can't remember who tweeted it yesterday, but I think it was...

Harry Stebbings
I think it was either Elon or Bill Ackman. And it basically showed different countries' creation of EV providers, and it showed the US, and it was like, the US, I mean, without Tesla, it would have been in the dumps, because it would have only had General Motors. But China was, like, up and to the right.

Does that worry you? It worries me a lot. You know, one of the elephant-in-the-room topics, which I think, you know, as an AI community, we rarely discuss, is that at its core, this AI technology has the potential to be one of the greatest military assets that humanity has ever seen. Let's say you had AGI, one country with AGI and another country without AGI. Which one will win in a war?

Alex Wang
Well, probably the one with AGI is going to figure out how to produce all the weapons, or will figure out a brilliant military strategy, or will be able to hack the other country's systems. And it is potentially one of the greatest military assets that the world's ever seen, potentially even more of a military asset than nukes. If you think about this, we're in a geopolitical environment that is increasingly tense. The amount of conflict in the world has been monotonically increasing over the past few decades.

You're seeing multiple wars being fought in the world, and some of them without very clear paths to resolution. And there are totalitarian leaders right now in the world, many of them for whom, if, let's say, China or Russia had AGI today and the United States didn't, I would imagine they would use that to conquer. That's a really scary outcome for the world writ large, and I think it's one that the Western world needs to spend a lot of our thought and effort towards preventing. Given that concern, should we not have closed systems?

Harry Stebbings
Obviously, open systems have a lot of benefits, but the challenge with open systems is anyone can use them, and that means that Russia can use them, China can use them, and everyone has the same levels of access. Well, supposedly. So should we not have closed systems, with what you just said?

Alex Wang
I think there's a bit of a dichotomy that must emerge. I think we need to think about the most cutting edge and the most advanced systems. Those we will want to ensure are closed for geopolitical reasons, for military reasons, for whatever reasons. Like, as we develop systems that are genuinely so, so powerful, we'll want to keep those closed. That doesn't preclude us from making open less advanced versions of technology that, frankly, just have the ability to produce a lot of economic value.

And I think that's where we are with Llama right now. Llama 3, in and of itself, is not a military asset yet. And I think that there's clearly a line underneath which it's totally fine to have open models. So what we need to be thoughtful about is where that line is and when we are getting close to it. Before we discuss some company building principles, which I do want to touch on: in ten years' time, what does that foundation model layer look like?

Harry Stebbings
Who's independent? Who's been acquired? What does it look like? I think at its core, what we've seen about the foundation model race is that it is incredibly expensive, to the level where, you know, these models have gone from costing hundreds of millions, to a billion dollars, to maybe multiple billions. I think in ten years' time, maybe they'll cost tens or hundreds of billions.

Alex Wang
There's just not very many entities that have that much discretionary capital to invest into these AI models. So naturally, what will happen over time is that the AI efforts, the foundational efforts, will coalesce around nations or the large tech companies. Over time, basically, only those with hyper-profitable business models, whether that's a nation state or one of the hyperscalers, will be the entities that could possibly subsidize or underwrite these massive AI programs in the future. Already it looks like a battle of giants, but at that point it's even more a battle of giants.

Harry Stebbings
So do you agree with me in saying that you'll see all of the smaller players acquired by the large cloud providers, your Google, your Amazon, your Nvidia, you name your large incumbents, but especially the large cloud providers, and have them integrated into their existing solutions? Yes, with maybe an asterisk, in that there are some of these partnerships where I think it'll be interesting to see how they play out: the OpenAI and Microsoft partnership, or the Anthropic and Amazon partnership. One of the most interesting questions of this technology era is how these partnerships actually end up playing out long term.

Listen, I do want to touch on some company building principles. Let's start with, I can't remember the exact thing you said on PR, but it was a brilliant statement. This was it:

Which is: the best PR is no PR. What did you mean, Alex? At its core, the traditional press industry is not particularly conducive to great companies being built. And let me be more specific around that.

Alex Wang
A lot of traditional press is oriented around generating clicks, and so the traditional press engine will build you up and generate clicks on the way up, and it'll tear you down and generate clicks on the way down. This is in contrast to, I think, 20VC and other direct outlets, so to speak, where founders and companies have a direct channel to get their message out and explain what they're working on. I think the other thing, and I actually think it's a little bit unfair, is I feel for traditional media. I don't care about clicks. Like, yeah, we have sponsors, but respectfully, if we didn't have them, we'd still be doing the show.

Harry Stebbings
I don't do sensationalist headlines. I'm not going to put some glossy "Scale AI predicts military devastation" on this episode, because I'm not there to just optimize for clicks. Exactly. Yeah, you're there to genuinely educate and explain what's going on to your audience.

It's almost unfair, though. Can you imagine if someone said, hey, I'm going to do Scale AI, but I don't care if we lose money? You'd be like, oh fuck, how do I compete with that? Yeah, it's pretty stark. I feel like I've received more fair treatment testifying in front of Congress than I have from various media outlets over the years.

Alex Wang
It feels like this totally ridiculous statement. But I think we're in this perverse state with a lot of traditional media where the system itself, because of this very click-oriented approach versus a genuine educational approach, almost has no way of being fully fair to the companies. I think the imperative is on the companies themselves to properly tell their story through direct channels and through podcasts and through avenues where their message won't be altered. I completely agree. I think this is why founder brand today is more important than ever. Because if you don't own your means of distribution, it will be contorted.

Exactly. It's kind of a shocking state of the world. But has that changed your strategy then? Yeah, I think for us we think a lot about how do we get the direct message out there and how do we develop, to your point, what are the purest ways that we can transmit and explain what we're doing? And this is a great example.

You'll ask me a question, I will answer exactly what I believe and think, and this will go out to your listeners and your viewers. It's one of the purest forms of getting the message out there. I think one thing that people make a big mistake on, though, is they then try and build the direct channels around the companies, and, respectfully, people don't follow Scale. People follow Alex.

Harry Stebbings
It's much easier to build followings with personalities than it is with companies. I think this is true. I think there are so few companies that can. OpenAI is one where I think OpenAI as an entity has a lot of meaning as a brand. But it does.

But if you look at the amount of times that Sam Altman trends versus the amount of times that OpenAI trends, it is disproportionately higher for Sam Altman. People, still now more than ever, love the cult of personality. Yeah, that's a fascinating thing. I mean, that definitely should be.

And that transcends, actually, when you look at Lionel Messi at Miami, when you look at Margot Robbie with Barbie, the celebritization of individuals in organizations or in movements drives everything. That's fascinating. I mean, it probably speaks to a deep human need. I think we as people, we have a lot of circuitry to understand individuals. We have this ability to understand individuals.

Alex Wang
It's very hard to understand what an organization means. There's nothing intuitive there. So, should founders give a shit about traditional PR? Should they care about getting in the traditional press? I would argue no. I would argue we're in an era now where they shouldn't.

They should think about what is an interesting point of view they can have, and what is the purest way to get that point of view across. When do you feel the press tried to tear you down unfairly? We've had, I would say, almost precisely, this story:

We had an incredible rise up and an incredible come-up. Maybe we initially became a unicorn back in 2019, and for the few years after that, you know, it felt like smooth sailing. And then starting in about 2022, right when the entire media narrative, let's say, was tearing down tech companies, because, I mean, in some ways, it was very fair. Many, many tech companies received very high valuations, there was an incredible amount of excitement in tech, and then the markets all crashed.

Starting in 2022 was when I noticed, for us specifically, the tone entirely shifted, where the media engine pointed itself towards calling out all the missteps from companies like us or a lot of our peers, versus trying to take a balanced perspective. Another example of this: starting about 2020, we began working with the US military and the US DoD. This was obviously long before the current defense tech hype wave and long before all that, but it was driven by an intrinsic belief that I had, and we had as a company, that it's important for the United States DoD to have access to incredible AI technology; that was a fundamentally important thing for the future of the world. And in the years after that, by and large, the traditional media engine actually tore us down for supporting the US government and supporting the military, versus taking a broader view that maybe this was actually a positive thing, to support the US military. This almost goes to what I was saying about the dichotomy of treatment, testifying in Congress versus with the media.

I testified before Congress about AI's use in the military. I would say the treatment I got there was a properly broad one. It was: obviously this is powerful technology that we need to be thoughtful about, but it is so important that America leads in this, and thank you for everything that you're doing. That, I felt, was the response.

Whereas in the media, it's this incredibly scornful perspective: is this a good thing? Do we trust this company? What does this mean? It's shocking. But I think it goes back to incentives driving outcomes.

Harry Stebbings
And what are the incentives of the media versus the incentives of Congress? Congress isn't there to sell clicks. They're there to hopefully get to an informed decision on the best outcome. Exactly. Yeah, incentives drive outcomes.

I loved something you also said: why hiring people who give a shit is harder than it sounds. What do you mean, and how do you think about that when hiring? If you hire people who, as we say internally, give a shit, they really, really care. They really, really care about their work product. They really, really care about the quality of their work.

Alex Wang
They really, really care about the organization. They care about making sure that the company has an impact. They just really care. How that manifests is they're willing to sweat every single detail. And if they get roadblocked or there's something in their way, they'll go the extra mile to make sure they get through those things.

That's how startups work: these small teams of people who each care ten or a hundred times more than the average employee inside a big company, and you end up just solving so many more problems than the big companies do. How many people do you have at Scale today? We are about 800 people.

Harry Stebbings
800 people. You are now getting to the bigger company size. It is harder, you know, the kind of only-hire-A-players thing. A players, by definition, are rarer. Can you have 800 A players?

Alex Wang
I think the answer is yes. What we say a lot internally is: how do we hire the Navy SEALs, not the Navy? Not that there's anything wrong with the Navy, but how do you have a really small, elite group where you're really hiring the cream of the crop? And this comes down to process. For us, at this point in the company, I still approve every hire. I will either directly interview or look at the interview feedback to understand every single person we hire, to ensure that we're keeping an exceptionally high bar.

Harry Stebbings
And in that way, what percent of the time will you go against the recommendation of the team on a new hire? Maybe on average, 25% to 30%. Like, a lot? Like, a lot. And I think usually it's because there's a new hiring manager who needs to get calibrated, or it's an edge case of various forms.

Alex Wang
But to me, the way I think about this is: I, as the founder of the company, have seen everybody who's come in, and I've seen who succeeds and who fails. Almost like an algorithm, I have developed the most fine-grained dataset of what it looks like for people to be successful at Scale and what it looks like to have the Navy SEALs versus the Navy. And it's my job as a founder to help ensure that we, as an organization, are actually utilizing all the knowledge and learning that's happened over the past eight years and carrying it forward. Final one: what was your biggest management or leadership fuck up? As an example, mine is that people act out of fear or freedom.

Harry Stebbings
You know, when you bring someone in, some people act out of, like, you have to perform, you have to perform, and other people act out of, hey, I trust you, I respect you, do your best work. You just have to identify which camp someone's in, and then hopefully, if their skills are there, they should operate at their best. I wish I'd known that when I started, and I didn't, and I just defaulted to fear with everyone there. What do you know now that you wish you'd known, and why did you fuck up? The biggest one was actually in that same era, 2020 to 2021: thinking that hypergrowth as a company meant that you had to hypergrow your team.

Alex Wang
In those few years, we did what a lot of tech companies did. We doubled, tripled the team year on year. In 2020, we were about 150 people. By the end of 2022, we were over 700. It was an insane amount of hiring and an incredible amount of hypergrowth as a team.

What I found is that when you hire that quickly, it is impossible to do what we've just been talking about, which is maintain that high bar and that feeling of excellence within the team. Did you see the reduction of that bar in real time? It was kind of subtle. You would hire all these people in and notice it the next year, or six months later; you would notice it slowly. There were challenges that the organization used to be able to deal with and solve that slowly just calcified, and we weren't able to get around them.

And so you'll notice, from the end of 2022, when I said we were 700 people, to now, when we're 800 people, the team has mostly stayed the same size, but the revenue of the company has grown dramatically. It's funny, companies have brand inflection points. They go hot, they go cold, they go hot again. Do you know what I mean? It feels like, from the outside, Scale is hot again.

Well, I don't mean that to be super nice or not nice. I didn't mean anything by saying you were cold. But it's just weird how brands have moments of heat and not heat. This is a fascinating thing, actually.

I actually asked Patrick Collison this question as well. Stripe obviously is an incredible company that has, for a lot of its lifetime, I think, been one of the iconic Silicon Valley companies. And I asked him whether or not he thought that the fact that they were such an iconic company was beneficial in all the hiring they did. And he made an interesting point, which was that the best people they hired, he thinks, would have joined whether or not they were the hottest company in Silicon Valley. It was these sort of off-the-beaten-path people who were actually the best hires they could have gotten.

And a lot of the people who joined because they were the hottest company in Silicon Valley, for one reason or another, weren't necessarily the most valuable employees. And so there's this element where I think the common belief and narrative is that you want to be the hottest company so you can attract the best talent, so you can hypergrow, so you can then keep growing. And I think that's often so, so difficult. It's much more about how you develop an ecosystem of talent that is very self-preserving, keeps a very high bar, and always seeks out and searches for the best people, independent of whether the company's hot or not. Because, to your point, you'll have moments where you're hot and moments where you're not hot.

And you need that talent ecosystem to be self-preserving, independent of that, to draw the best out. I also think it depends on function. When you look at a lot of go-to-market functions, traditionally for sales, they do concentrate towards hotter brands. And actually, if you can get a concentration of incredible salespeople, especially as you expand geographies. I'm thinking of OpenAI's go-to-market team in London, unbelievably good, one of the best in London. And it's because they have an amazing brand.

Harry Stebbings
Do you see what I mean? So it depends how close you are to the nucleus and what function you're in. Yeah, I think that's right.

Alex Wang
But then if you look at core technical development at OpenAI, a lot of that is still driven by people who have been at OpenAI since before they became the hottest company ever. Another company that I think went through this is Airbnb with Brian Chesky. I think he's talked about this publicly. After the pandemic, he all of a sudden realized he had to rebuild the entire company, and he massively shrunk the team.

He invested a lot more into talent density and then kept the team small. And I think they're now one of the most profitable companies per head in all of tech. And that's because of this realization he had that he didn't need to keep growing the team to see the financial output. Listen, I want to do a quick-fire round, so I'm going to say a short statement and you give me your immediate thoughts.

Harry Stebbings
Does that sound okay? Yes, let's do it. Okay. So what have you changed your mind on most in the last twelve months? I think it's actually everything about this hypergrowth stuff that we've been talking about.

Alex Wang
And it's really around divorcing team hypergrowth from company hypergrowth and investing extra into quality and excellence. What's the biggest misconception you hear most often about AI? I think the biggest one today is that all that stands between us and AGI is compute. I think we need data to get there too. Tell me, you can have any board member in the world who you don't currently have.

Harry Stebbings
You have an amazing board, but who you don't currently have. Who would you choose as your next board member? This is a great question. You know, I don't think this is practical, but I do think Satya Nadella has been one of the most brilliant business strategists of the modern era. What he has accomplished at Microsoft is just staggering, and I think any board would be very lucky to have him. Unfair one for me to ask, but I actually like it.

Which is: what question are you not asked, or never asked, that you feel you should be? That's an interesting one. The interesting one is how my perspective on AI has changed across the successive eras. And I mention this because I started the company in 2016. The first three years of the company were a full focus on autonomous driving and autonomous vehicles.

Alex Wang
And then in 2019, we actually started working on generative AI. We started working with OpenAI on GPT-2, and so we are one of the few AI companies that, I think, has seen multiple eras of the technology and has seen the first boom and bust cycle with autonomous vehicles. I think it's an interesting question: what's the same in these successive eras and what's different? That's an interesting question. How has your view changed?

Harry Stebbings
Are you most excited now? I'm quite excited, but I think there are also reasons to be cautious. In autonomous vehicles, one of the things that happened during the craze was that a lot of promises were being made that were divorced from the technical reality. A lot of the prominent autonomous vehicle companies and organizations were making bolder and bolder promises to be able to raise money. At first they weren't super divorced, but over time they became more and more divorced from the technical realities, and that resulted in this very painful trough where the promises weren't met.

Alex Wang
And so it felt like the entire industry was falling apart. And actually, at the end of the day, now we have Waymos driving around San Francisco, properly L4 autonomous vehicles, and Tesla Autopilot has gotten really good. If we had made more measured promises along the way, I think now we would feel amazing about autonomous vehicles, whereas instead we went through this huge up and this big down, and maybe it's on the upswing again. This is one of the biggest concerns I have about generative AI. I hope not, but the same thing might happen again: really big promises start to get made about the technology that are divorced from technical reality, and that creates a gap that is bound to cause a hangover.

Harry Stebbings
Penultimate one: will Trump win? I still think it's a toss-up, actually. US elections are so strange to think about because it always gets decided by the swing states. Frankly, I don't trust anyone on the coasts to have any fine-grained understanding of how the swing states will play out. I have no idea.

Alex Wang
I don't think anybody should listen to anybody who lives on the coasts to understand what's going to happen. It always boils down to the swing states. Final one for you, my friend. Where's Scale in ten years' time? You know, hopefully doing something very similar to what we're doing now, which is continuing to be the data foundry for AI and serving as the data pillar for AI progress.

Harry Stebbings
Would you like to go public? For sure, yeah. Well, one thing I think a lot about is how you solve problems that will never go out of style. But would you like to be the CEO of a public company?

Do you know what I mean? You look at the Collisons, and you're like, I don't know why you would if you were Stripe. There are clear benefits to being a public company, for sure, but I think Stripe is an incredible company in that they can be incredibly profitable and accomplish all their core financial goals without needing to go public. Listen, Alex, I loved having you on the show. Thank you so much for joining me.

As I said, it's so much nicer to do this in person. I'm sorry for the many meandering pivots and turns, but this was fantastic. Yeah, this was a lot of fun. I have to say, I absolutely loved doing that conversation with Alex.

I want to say a huge thank you to him for being so open and honest with quite a few of those revealing questions. If you want to watch the full episode live in the studio, then you can check it out on YouTube by searching for 20VC. That's 20VC on YouTube. But before we leave you today, let's face it, your employees probably hate your procurement process. It's hard to follow.

Unknown
It's cobbled together across systems, and it's a waste of valuable time and resources. As a result, you're probably facing difficulties getting full visibility, managing compliance, and controlling spend. It's time for a better way. Meet Zip, the first modern intake-to-pay solution that can handle procurement and all of its complexities, from intake and sourcing to contracting, purchase orders, and payments. By providing a single front door for employee purchases, Zip seamlessly orchestrates the procurement process across systems and teams, meaning you can procure faster with the least amount of risk and get the best spend ROI for your business.

With over $4.4 billion in savings for their customers, Zip is the go-to procurement solution for enterprises and industry disruptors like Snowflake, Discover, Lyft, and Reddit. Finally, a solution employees love to use, where buying things for work just works. Get started today at ziphq.com/20VC. And speaking of game changers with Zip, we have to talk about Cooley, the global law firm built around startups and venture capital. Since forming the first venture fund in Silicon Valley, Cooley has formed more venture capital funds than any other law firm in the world.

With 60-plus years working with VCs, they help VCs form and manage funds, make investments, and handle the myriad issues that arise through a fund's lifetime. We use them at 20VC and have loved working with their teams in the US, London, and Asia over the last few years. So to learn more about the number one most active law firm representing VC-backed companies going public, head over to Cooley.com and also CooleyGO.com, Cooley's award-winning free legal resource for entrepreneurs. And finally, travel and expense are never associated with cost savings. But now you can reduce costs up to 30% and actually reward your employees.

How? Well, Navan rewards your employees with personal travel credit every time they save their company money when booking business travel under company policy. Does that sound too good to be true? Well, Navan is so confident you'll move to their game-changing all-in-one travel, corporate card, and expense super app that they'll give you $250 in personal travel credit just for taking a quick demo. Check them out now at navan.com/20VC. As always, I so appreciate your support.

Harry Stebbings
Really, it just means the world to me and the team here. And stay tuned for an incredible episode this coming Friday.
