How AI adds to human potential: (LIVE) with Scale AI's Alex Wang and Intel's Lama Nachman

Primary Topic

This episode explores how artificial intelligence (AI) can enhance human creativity and potential, rather than just automating tasks.

Episode Summary

In a lively discussion on the "Masters of Scale" podcast, host Jeff Berman is joined by Scale AI's Alex Wang and Intel's Lama Nachman to delve into the evolving role of AI in augmenting human capabilities. The conversation, recorded live at the Intel Vision event, focuses on the democratization of technology through AI, its impact on industries like film and law, and the ethical considerations of AI development. The episode reveals differing perspectives on AI's role in job creation and displacement, and the potential for AI to drive significant societal advancements in areas like climate change and healthcare.

Main Takeaways

  1. AI can democratize creative industries, making tools accessible to a broader range of creators.
  2. Job displacement by AI is nuanced, potentially reducing tasks but also creating new opportunities.
  3. Ethical AI development is crucial to balance automation benefits with societal impacts.
  4. AI should be developed with the intent to complement and augment human abilities, not replace them.
  5. Regulatory frameworks are necessary to manage AI's societal impact, especially regarding privacy and data security.

Episode Chapters

1. Opening Thoughts

Jeff Berman introduces the topic of AI's potential to enhance human creativity and its implications for various industries. Jeff Berman: "There's so much talk about AI taking things away from people."

2. The Power of AI in Creative Fields

Discussion on how AI tools like OpenAI's new video generator can transform content creation. Alex Wang: "It's a huge unlock for what film could be."

3. AI's Impact on Jobs

Exploration of how AI might affect employment across different sectors, emphasizing the role of AI as a "co-pilot" in various professions. Lama Nachman: "Making us more efficient will mean less of us will be required for that same amount of work."

4. Ethical and Societal Considerations

The conversation shifts to the ethical development of AI and the need for regulations to ensure privacy and data security. Lama Nachman: "I think it should absolutely be up to people. It's your data."

Actionable Advice

  1. Embrace AI tools to enhance productivity but remain vigilant about their ethical use.
  2. Stay informed about AI developments to leverage new technologies effectively.
  3. Participate in discussions about AI regulation to ensure responsible deployment.
  4. Consider AI's potential to democratize access to technology across various fields.
  5. Explore AI's capabilities in creative endeavors while safeguarding intellectual and artistic integrity.

About This Episode

Generative AI is advancing at a breakneck pace, prompting questions on risk and opportunity, from content creation to personal data management. In a special live recording, we delve into the ways AI can augment human work and spur innovation, instead of simply using AI to cut costs or replace jobs. Host Jeff Berman joined a seasoned AI researcher, Intel's Lama Nachman, and a young start-up founder, Scale AI's Alexandr Wang, on stage at the Intel Vision event in April 2024. They explore topics like AI’s disruption of creative industries, mitigating its biggest risks (like deepfakes), and why human critical thinking will be even more vital as AI technology spreads.

People

Alex Wang, Lama Nachman, Jeff Berman

Companies

Scale AI, Intel

Books

None

Guest Name(s):

Alex Wang, Lama Nachman

Content Warnings:

None

Transcript

Tucker Ligursky
Hi listeners, it's Tucker Ligursky. As a researcher on Masters of Scale, I use the innovations of AI on a daily basis, and with an increasingly overcrowded market, it's hard to discern which ones are truly beneficial for my work, not to mention safe. That's where Grammarly comes in. My job requires me to prepare dozens of documents every day, and with Grammarly, I'm able to make my dossiers and research documents clearer, more direct, and more concise. In fact, I recently discovered that Grammarly users spend 66% less time editing content, which is a huge advantage since our most precious resource is undoubtedly time.

What I love about Grammarly is its commitment to responsible AI. My team works tirelessly to produce top quality podcasts, and it's vital that we keep our data secure. Grammarly has been around for 14 years and has maintained a business model dedicated to never selling your data, which means you can trust it with your most sensitive information. Join me and over 70,000 teams who trust Grammarly to work faster and hit their goals while keeping their data secure. Visit Grammarly.com to learn more.

That's Grammarly.com.

Aaron Bastanelli
Hi, listeners. It's Aaron Bastanelli, an audio engineer here at WaitWhat. As you know, behind every successful business is a story. On the podcast How I Built This, host Guy Raz talks to the founders behind the world's biggest companies to learn how they built them.

In each episode, you'll hear entrepreneurs share moments of doubt, failures, and how they were able to overcome setbacks on their way to the top. How I Built This is a masterclass in innovation and creativity, a how-to guide for navigating life's challenges from the people who've done it all. Follow How I Built This wherever you get your podcasts; listen early and ad-free right now on Wondery.

Jeff Berman
Hi, it's Jeff Berman, your host for Masters of Scale. There's so much talk about artificial intelligence taking things away from people. But on stage recently, I had the opportunity to speak with two leaders in the field about how AI can increase human potential, lowering barriers to entry for fields like content creation. How do you design human AI systems that can actually get you the essence of what human creativity is? And how do we think about what people need to bring their creativity to the table?

Lama Nachman
You're actually democratizing a lot of these technologies, so now you're enabling a much wider spectrum of people to bring their creativity to the table. It's a huge boon for storytellers. The question for us is: if we looked at the stories or the movies or the short films that are being done today, how efficiently could this be done in the future? That's Alex Wang, co-founder of the company Scale AI.

Jeff Berman
And before him, you heard Lama Nachman, Intel's director of human and AI systems research. I joined them in conversation at an event called Intel Vision in early April. The event brought together business leaders focused on the challenges and opportunities presented by artificial intelligence as AI advances at a blistering pace. It's a growing part of every business, in addition to being an industry sector all its own.

I took the stage with Lama and Alex to record this episode of Masters of Scale in front of a live audience of active stakeholders.

Oh, God, you guys are amazing. All right, take a seat.

Alex Wang
You gotta have incredible talent at every position. There are fires burning when you're going home. Can you believe it? You're such an idiot. And then you go back to this.

Lama Nachman
This is totally gonna be amazing. There are so many easy ways I'm. Supposed to know what to do. I have no idea what to do. Sorry, we made a mistake, but you.

F
Have to time it right. Oops. Working at a three bedroom apartment, stuff. That just seems absolutely nutballs ten years later. Well, that's just how you do it.

Lama Nachman
We haven't made just how you do it.

F
This is masters of scale.

We'll start the show in a moment, after a word from our premier brand partner, capital one business.

G
I woke up in the middle of the night because I had this nightmare that we were front-page news, that we'd made the stupidest mistake of our lives by making this pivot. That's Aparna Saran, chief marketing officer for Capital One business. And she's recalling a moment from her previous position at Capital One, when she was heading up a team designing a new business card. We had just made the decision to go all in and sunset the prior version of the product, which was honestly the cash cow for our business. We made that decision within a senior leadership meeting.

As someone who had been on the journey to build this out for five plus years, it was really exciting. But by the time the weekend hit, I started to feel the responsibility and the pressure. We are taking this big bet on something that I've built. Perhaps you've been there, you've made a pivotal decision. And then panic sets in.

F
How would Aparna calm her butterflies and steer her team through this pivot? We'll find out later in the show. It's all part of the Refocus playbook, a special series where Capital One business highlights stories of business owners and leaders using one of Reid's theories of entrepreneurship. Today's playbook insight: have multiple plan B's.

Jeff Berman
One of the most breathtaking advances in AI came just a few weeks ago when OpenAI announced a new tool called Sora. It generates AI video from a text prompt. You type in details of a scene, the characters, the setting, the actions they're taking, and Sora creates a video from just those words. It's expected to be released to the public later this year, but Masters of Scale was granted access to experiment with its beta release. Now it doesn't do dialogue yet, and of course you can't see a video on this audio podcast, but I had to start the show with a Sora demo, and last night some of you may have been harassed by me asking for some prompts for some ideas.

And I met Kathy and Sarah. I don't know if Kathy and Sarah are in the room today, but they gave me a great prompt, which, if we can roll the next video, we got last night, and within an hour, we turned this around. Sarah and Kathy said, show me a French bulldog wearing a red fedora in Paris eating a croissant. So what pops up on the screen is a realistic, adorable dog in this chic Parisian setting, chomping a delicious pastry. Who doesn't like French bulldogs wearing fedoras eating croissants?

So you can see how quickly the pace is going. That's just one ambient scene. But recently, an entire short film made with Sora was released, with multiple locations, voiceover, character development, and real pathos. It looked a lot like an Oscar-nominated short. All of this has shaken up Hollywood big time.

Tyler Perry stopped work on an $800 million studio expansion because he just doesn't know what AI is going to do to film and TV production. Jeffrey Katzenberg, one of the legends of the industry, said 90% of animation jobs are likely to disappear in the next few years. So I started there with Scale AI's Alex Wang and Intel fellow Lama Nachman, and I asked: is this the end of Hollywood as we know it? So that's an interesting question. What we have seen as more capabilities have come up is much more interesting innovation that actually came out.

Lama Nachman
So, and it's really, I mean, it's a choice that we make, right? Because you, every time there is an interesting technology that has the ability to automate, you can go the path of automation, or you can go and figure out how do you design human AI systems that can actually get you the essence of what human creativity is. It's like the complementarity between AI and humans. Of course, that will only happen if we design systems with that in mind, so it could go that way. I love that optimism, and I feel that.

Jeff Berman
And yet, Alex, are you as optimistic about this, or do you feel like we're entering kind of this interregnum where the old king is dead, but the new king is not yet born? It is something like an interregnum. I think it's a huge unlock for what film could be or what films can be. One way to think about this is a lot of films now have tens of millions, hundreds of millions of dollar visual effects budgets. It's true.

Alex Wang
They won't have to spend that much on visual effects going forward. The original advent of animation, advanced 3D animation as Pixar sort of introduced originally, was this huge change. There were these kinds of movies that you could create that you could have never made before, that were somehow significantly more relatable to children and much, much more meaningful to children. It created a new canvas. I really think about this as an evolution in that vein, which is: we're continuing to create more engaging, imaginative, and powerful platforms for creative content.

And Hollywood, as a business, is going to have to get on board with that. Do you think job displacement is coming in a meaningful way before these kinds of new industries are created, new jobs are created? The pattern that we see over and over again is something like this. There's a very talented team or group of people within the company, whether it be engineers and coders or lawyers or consultants, for whom maybe 20% of their job requires very high ingenuity and capability and all of their training and all their brilliance, but 60% of the job is relatively rote, or is something that fundamentally does not actually require all of their training and brilliance and capabilities.

The pattern we see over and over again is: how do you build copilots that are able to help them do the 60% of other work and then maximize their time so they can spend most of it doing the stuff that actually requires their ingenuity and capability. That's what we've seen with coding, where what the coding assistants and copilots do is help with all the boilerplate code, or all the simple fixes, or catch bugs more easily, stuff like that. Or the legal assistants help you just draft the entire document. But all the sort of important legal decision making is still left to the lawyer. This is the pattern of the future, which I think is a pretty optimistic one, because this does not in itself result in labor displacement, because for most of these jobs that we're talking about, there's a shortage of them in the economy.

We need more doctors, we need more engineers. We need more of them in the world. And I think this goes towards what I see as the longer-term trend, frankly, which is that AI systems are moving towards being superhuman in some ways, but meaningfully subhuman in others already. GPT-4 is a much better writer than I am, just in terms of not making grammar mistakes and being able to structure very flowing prose and being able to write very beautifully. But it's much worse than me at reasoning.

It's much worse than me at thinking in sort of like very long forms. It's much worse than me at getting factuality correct. It's much worse than me in some other ways. And so the direction of the future is going to be these hybrids between human capability and some superhuman AI capabilities to achieve better outcomes. Lama, I hear Alex and I want to be all in with Alex on this, and yet I have a friend who runs a small law firm, and because of the productivity gains they've been able to make using AI, they've been able to reduce the number of paralegals they need.

Jeff Berman
Right? It's logical. So that efficiency gain, the ability to focus on higher-level work, makes a ton of sense. And thinking of AI as a copilot, right? Microsoft did a great job branding their product there.

But why, logically, why shouldn't companies then say, well, we can reduce costs by 40% at the same time? Yeah, and they can. And that will happen. The question is, is this creating an economy where the demand from all the new things that can happen will end up resulting in more opportunities for jobs overall? There's no question that there will be a lot of jobs also that will be displaced.

Lama Nachman
Right? Making us more efficient will mean less of us will be required for that same amount of work. But the assumption here is that it is that same amount of work. All of that is really starting from this notion that it's a zero sum game. I think that's actually kind of the missing piece of that puzzle, right?

Because I think we're in such a generative world where opportunities and new things that we haven't even dreamt of doing will be made, and that will impact certain jobs that become limited or unneeded in a certain way. But that means it's an opportunity as well to continue to innovate. And, differently from previous innovations, the reason I'm actually optimistic is that if we build AI systems in ways that can support and amplify human capability, then in some sense that capability is helping the human continue to evolve. And do you think there will be that gap?

Jeff Berman
Do you think that we'll see massive job loss over the next few years before new jobs come up, or are you more optimistic? I think that it's really hard to guess whether the ability to transition to different types of jobs, being enabled with these AI capabilities, is going to help people bridge that gap or not. Being an optimist, I see this as a place where we can train people very quickly with new types of skills, because of the fact that AI can actually be a support system. But I would be shocked if there was no, like, dip in certain areas and fields, for sure. Alex, do you agree?

Alex Wang
I think it's certainly a possibility. And I think that the key, I think, for us as a society is that AI fundamentally should not just be a cost cutter. It should be a tool that creates new business models. It should be a tool that creates a lot of economic growth. It should be something that really spurs a huge amount of new products, new business models, new direction and new capabilities outside of just displacement of human labor.

Jeff Berman
Massive changes are surely coming to the labor market because of AI. And Alex and Lama have very different views of the field, influenced in no small part by where they are in their careers. Alex is a 27-year-old founder of an AI startup who dropped out of MIT to go into business. Lama is a researcher with decades of experience in studying technology. I ask them what business opportunities and jobs they see coming, thanks to AI.

We have just so much uncertainty in the world today, and this feels like it's yet another layer of it. I'm curious, Alex, you dropped out of MIT, right? College dropout made good. Lama and I were talking before we came on stage. We both have 17-year-olds who are anticipating going on to college in a year and change.

What are the skills that you think young people need to develop today for the economy that's coming? I think prompt engineering is a very important skill. Knowing how to interface with AI systems is an incredibly critical capability that is, I think, very akin to software engineering over the past few decades. But if you take a big step back, what are these algorithms really good at? They're good at things that are present in their data set, and most of them are trained off of the entirety of the Internet.

Alex Wang
One thing that's not present on basically any of the Internet, or very sparse in all of the data that these models are trained on is very consistent and thoughtful, long form thinking. So let's say you're given a very tough problem at work or in school, and it takes like, you have to try one thing and it doesn't work, and you have to try another thing and it doesn't work. And it takes maybe 30 or 40 steps to really work through it. And I think for most of us in our jobs, most of the hard things that we do are akin to this. If you have to organize a very complex project, there's a lot of trial and error that goes into it.

Oftentimes in the field, we call this agentic behavior or agent behavior. So how can the model actually be able to deal with new information and make choices and kind of like, think through many, many steps? One thing you'll notice is the models today are pretty bad at this. They're very bad at thinking multiple steps. They usually make a mistake on the third or fourth or fifth, sort of like what's called a reasoning step or chain of thought.

I think this is something where humans will always be differentiated, because we're very good at long-form reasoning and very good at, like, thinking over very long time horizons. Models are not good at that fundamentally. They're very good at predicting the next token. They're not good at, like, you know, thinking over a very long time period. I again want to be with Alex on this, and yet I look at the evolution just of AI video, for example, and the pace.

Jeff Berman
It's unbelievable. Do you agree that the models will take a long time to get there on long-form reasoning, or is this actually just one breakthrough away? There is no question that that's where AI struggles. And a lot of people come to me and say, oh, what do you think people should study? The one thing that I would say is really critical, right.

Lama Nachman
Is really critical thinking and reasoning. Right? Especially given the fact that you can't even trust what an AI system will even generate, right? That's part of my concern around a lot of the state of where AI systems are today, even though we think about it as an equalizer and it improves equity and democratization, all of that, in reality, people who are experts can take what they want out of it and know what to ignore. And it's really people who don't have that capacity are probably more vulnerable to the mistakes that these systems actually make.

Jeff Berman
This is where what you and Alex are saying dovetails really nicely, because prompt engineering and critical thinking and reasoning actually go very much hand in hand. I want to just give a quick personal example and segue into really personal AI. Seven weeks ago, my 14-year-old and I were having lunch, and we were planning a spring break ski trip. We went on ChatGPT, and we asked it to compare this year's weather data, using Bing, to the past ten years of weather and snowpack data. We included a few other variables to avoid altitude sickness, ease of travel, et cetera.

It spat out a suggestion; it ended up being a terrific trip. What I'm imagining is the next step: the integration with my calendar, the integration with my purchase history, my credit card, my bank account, et cetera. And the leap from asking that one question through that one prompt to surfacing flights, car rental, a hotel, restaurant reservations, et cetera, feels quite short. Alex, how close to that do you think we are? I think that, like, what I'll call the L2 level, that's already basically here.

Alex Wang
So I'm on the board of Expedia. Expedia has launched a generative AI travel assistant. Plenty of customers use it. They find it very, very helpful and useful. As part of that, Expedia built models that were sort of fine tuned and customized to be really good at these travel workflows and sort of these travel agent like questions and answers.

And so I think that's more or less here. And I think the key, key unlock is actually the consumer experience on how to make that really natural and easy and flow into all the other behaviors. What I'm excited about, and I think is actually closer than we think, is the next unlock that we're talking about where it's actually autopilot, true autopilot for everything. And that reliability boost in the models, I think, could come here faster than we expect. I think there's a question about what type of an interaction and engagement one wants, right.

Lama Nachman
I love the example of autopilot. So I have a Tesla, and I've tried self-driving, and it drove me crazy. And I was thinking about this: I use self-driving rides, like Waymo, in San Francisco all the time. I don't have a problem with that. But I'm sitting in the backseat.

I'm not trying to control its behavior. Right. That's all I want. If I'm sitting behind the steering wheel, that's not my natural interaction, right. I am wanting to control that experience.

I am wanting it not to change lanes when I don't want it. Like, why are you changing lanes? That makes no sense. Because I'm trying to map it into my own control level and thinking of what I want done, right. So a big part of whether we're ready for an experience has a lot to do with what type of interaction we're actually enabling with these systems.

Jeff Berman
Right. It's context-dependent. Exactly. I'm actually hearing two different things, right. One is, humans are notoriously bad self-reporters, right?

We are really bad witnesses, even about ourselves. And so we say we want this, but really we kind of want that much of the time, right? So there's a version where we say, I'm headed to Phoenix for the Intel Vision summit, and I'm looking for flights in this timeframe and whatever, and, okay, great. There's also a version where it's got access to my calendar, it's got access to my email, it's got access to my flight history, and it's able to say, Jeff, it looks like you need to go to Phoenix for the Intel Vision summit based on your past history. Here are the three flights I'm recommending for you.

And so I'm going to push into the privacy question here, because this kind of AI personal utopia state, of it inferring with 99.7% accuracy what I'm likely to want, sounds beautiful. But, like, should we be scared about the privacy components? We should absolutely be scared. Where do you see this going, and where should it go, on the privacy front?

Lama Nachman
Clearly the privacy issue has been around for quite some time, even before GenAI. Right. I think clearly the need for a massive amount of data to train these systems has made the problem much worse. Right. That in some sense is a little bit different than the "does it know everything about me so that it can hyper-personalize" question. Whether you're talking about a consumer or, you know, any other type of application, the reality is these systems are being trained on our data to be able to actually get to that level of intelligence.

Right. Irrespective of whether it's actually personalizing our needs. So I think that can only be solved by regulation. Right. Because you don't think a robust open marketplace will solve that problem.

Jeff Berman
It's going to require government intervention. Yes. An open market is something that will bring privacy as a value proposition, and people can always give up whatever they don't care about. And that's true, but there needs to be regulation to ensure that actually the privacy constraints that people are claiming are actually true. Right.

Lama Nachman
I think it should absolutely be up to people. It's your data. You should be able to say, I want to give it up or not give it up. I think the reason people have not taken care of privacy in that way is because they need that data, right?

It's not because it's impossible to make these systems private. You can build systems that can do that and not have your data being taken away and used to train other systems. But today, the reason it's not happening is because that's not advancing the state of AI. More generally, the need for more and more data to advance AI bumps up against the need for government regulation. We'll hear more on that from Intel's Lama Nachman and Scale AI's Alex Wang after the break.

F
We'll be back in a moment after a word from our premier brand partner, Capital one business.

G
There was panic that set in that night because I didn't want to let people down. We're back with Aparna Saran of Capital One business. She was recalling the time she woke up in a cold sweat, terrified that the new product she had been working on might fail. So the next morning, a Sunday morning, she sat down and wrote an email. And I said, you know what?

I'm going to just, like, share this with my peers. It was very emotional. It was like sort of a cry for help. Aparna realized that if the new product didn't take off, she needed a plan B, preferably multiple plan B's. I'm inviting them to be the thought partners so that we are mitigating as much risk as possible and we have contingency plans in place as we make this move.

You write something like this and your heart is pounding. Should I send this? It was a super vulnerable moment for me, but then I was like, I'm going to just send this. Like, what's the worst that'll happen? It can't be worse than being on the front page of the newspaper.

F
So she held her breath and hit send. What happened next would surprise even her. We'll hear about that later in the show. It's all part of Capital One business's spotlight on business leaders following Reid's Refocus playbook.

Jeff Berman
I'm Jeff Berman, and this is Masters of Scale. You can find videos from our interview catalog over at the Masters of Scale YouTube channel. Here's more now of our live taping at the Intel Vision summit on the dizzying pace of AI and the consequences for business leaders.

Alex, I spent four years early in my career as chief counsel to a member of the Senate leadership, which I feel makes me especially well qualified to say how spectacularly unqualified Washington is to regulate AI at this stage. You're spending a lot of time in DC. What are you seeing from Capitol Hill and the administration? And are you bullish on the path toward regulation, or are you bearish? What we've seen over the past year, basically ever since the launch of ChatGPT, has been an incredible amount of engagement from all of DC, I would say Capitol Hill, from folks in the White House, from folks across many of the departments, in really understanding what the risks are.

Alex Wang
And what risks do we need to be very concerned about, such as deepfakes? What are all of the various risk factors that the technology holds, as well as what are the opportunities with the technology? If in the United States, we're going to have private enterprises investing tens of billions, potentially hundreds of billions of dollars in the future to build very powerful AI systems, we need the proper guardrails and safeguards in place to ensure these technologies serve the needs of humanity and don't have major risks associated with them. And the risks are massive. I mean, we got social media spectacularly wrong, right?

Jeff Berman
I mean, every bit of data says so. And yet, if we regulate badly or over-regulate early, we're gonna stifle competition. We're gonna fall behind the rest of the world. Like, we can't have that. Here's the fear I have.

We've already seen deepfakes. We've already seen a deepfake robocall of President Biden in New Hampshire. The consumer platforms are, at best, going to be a half step behind. Right. What do we do about that?

Lama Nachman
There are very different risks, and the solutions for these different risks are very different. Right. Maybe one simple way of thinking about it is thinking about people who are actually trying to do the right thing, but these systems are so complex that it's very hard to control them. Then there is the other bucket, which is people who actually want to blow up the world, and they're actually utilizing AI in those ways. You know, these are, like, the catastrophic events, the terrorist threats, things like that.

Jeff Berman
And there are a lot of those people out there. Let's talk about deepfakes specifically, right. Because I think deepfakes, to me, you know, I hear all of these stories about, like, how robots will destroy the world and all of the Hollywood narratives. I think the most likely thing to happen is that people start to mistrust everything that they see and they don't know what's real and what's not. That's what worries me the most. Right.

Lama Nachman
Because then you can manipulate people to do anything. You don't need to have robots blow up the world. You can make people do that. Propaganda is a powerful drug. Exactly.

So you have to invest quite a bit into fixing the issue of detecting these systems, detecting deepfakes, looking in every possible way that you can look at that data and say, okay, here's the likelihood that this is actually a deepfake. Okay, so, Alex, should Meta and YouTube and TikTok and Snap be investing literally, probably, the billions required right now to protect us? Is that the answer? I think so. But it is a very difficult technical problem to be able to properly mitigate all the potential AI-generated content that's out there, which to me leads to a question of, like, should we be funding a lot more research in this area?

Alex Wang
And research in that direction is not something that's sufficiently funded today. I'm a born optimist. I can see all of the AI dystopian scenarios playing out, and I lean into the ones that are more encouraging as we bring the conversation to a close. What do you think we're looking at a year from now? We're going, wow, that's incredible.

Lama Nachman
For me, if we make inroads into climate change and materials discovery, those are two areas where AI can really transform the way we actually do this work. If we're able to make any dent there, that would have a huge impact on society. Unfortunately, knowing how governments run, I think the regulatory side, and where that might fundamentally change the responsible AI piece of the puzzle, will take longer.

Alex Wang
To me, progress in pharmaceuticals in particular is an area where I think we'll see very rapid progress. There have been incredible advancements in biological foundation models, and, unrelated to AI, huge advancements in synthetic biology, which I think will lead to a dramatic ability for us to help people who are sick. And there's probably no nobler mission than that.

Jeff Berman
Phenomenal. Lama, Alex, thank you for joining us for this live taping of Masters of Scale. Appreciate you both.

Lama Nachman
Thank you. Thank you, everyone.

Jeff Berman
My conversation with these two leaders in AI makes me both hopeful and concerned. Intel fellow Lama Nachman sees a future where AI is embraced as a tool to further human creativity, not just replace human tasks with automated ones. But AI systems will only augment work and further ingenuity if they are designed to do so. On the privacy and safety front, Alex Wang of Scale AI notes that it's a big business opportunity. And he's right.

Tools that safeguard personal data for AI users or successfully detect deepfakes will add enormous value to the market. But the tech industry doesn't exactly have a great track record when it comes to protecting privacy and ensuring security, and we'll need market incentives to stay ahead of the safety risks. We know for sure that government regulation alone will not be enough. AI will be even more transformational than social media, and it's likely better framed as bigger than the Industrial Revolution.

It's developing much faster than any sweeping change we've seen before.

The founders and leaders who stay mindful and run ahead of these risks are those who stand to best serve their customers and our nation through this time of massive change. I'm Jeff Berman. Thank you for listening.

F
And now a final word from our brand partner, Capital One Business.

G
Throughout the day, text messages and emails kept pouring in. Whatever you need, just let us know. We're back one more time with Aparna Saran of Capital One Business. She was telling us about a Sunday morning email she fired off in a moment of panic. Minutes later, her inbox was overflowing, and the support she found wasn't just emotional, it was practical.

We talked about detailed contingency plans and we created our go-to-market strategy. Before we were in full rollout mode, we had stage gates so that we could test and quickly learn and iterate. And within a matter of six months, as we were rolling things out channel by channel, those stage gates would allow us to pivot if we saw something that we didn't like. That day, Aparna learned a lesson that stayed with her: having multiple plan Bs doesn't just expand your options, it gives you new opportunities. The best way to pivot is to actually open doors for thoughtful conversations, because humility in knowing that you actually don't know everything, as well as the empathy in knowing that disruption is always drastic and abrupt, helps you go through that pivot with other people in a very different way.

F
Capital One Business is proud to support entrepreneurs and leaders working to scale their impact, from Fortune 500s to first-time business owners. For more resources to help drive your business forward, visit capitalone.com/businesshub. That's capitalone.com/businesshub.

Alex Wang
Masters of Scale is a WaitWhat original. Our executive producer is Eve Tro. The production team includes Chris Ota, Tucker Ligurski, Masha Makotunina, and Brandon Klein. Mixing and mastering by Aaron Bastanelli. Original music by Ryan Holliday.

Our head of podcasts is Lital Molad. Visit mastersofscale.com to find the transcript for this episode and to subscribe to our email newsletter.

H
Hi listeners, it's Anya Profumo, a producer on Masters of Scale. Scaling your business is no easy task, especially when it's so important to really nail the small details, like: how do I write an email that gets my point across? How do I easily convey complex information? Or, more importantly, how do I manage my reputation? All of these skills are crucial to taking your business to the next level.

But you're not alone in asking these questions. Every week on Think Fast, Talk Smart, join Matt Abrahams, a lecturer in strategic communication at Stanford Graduate School of Business, as he sits down with experts in the field to discuss these real-world challenges. So head to your podcast player and search for Think Fast, Talk Smart for the best tools, techniques, and best practices to help you communicate more effectively.