Primary Topic
Reid Hoffman discusses the intersection of AI technology and global efforts to distribute its benefits equitably, exploring significant AI milestones.
Episode Summary
Main Takeaways
- AI technology has achieved levels of knowledge representation previously unimagined, demonstrated by its ability to pass advanced academic tests like the AP bio exam.
- The development of AI is not just about increasing computational power but enhancing cognitive capabilities in specific, beneficial directions.
- Important AI milestones are being surpassed, with significant implications for future technologies in various sectors including medicine, science, and everyday life management.
- Geographical equity in AI development is crucial, with efforts needed to distribute technological advancements globally, avoiding concentration in traditional tech hubs.
- Reid Hoffman advocates for a balanced approach to AI skepticism, focusing on constructive criticism that leads to safer and more effective AI applications.
Episode Chapters
1. Introduction and AI's Promise
Reid and Aria introduce the episode's theme, discussing AI's potential to merge with humanity for a brighter future. Reid Hoffman: "Too often when we talk about the future, it's all doom and gloom. Instead, we want to sketch out the brightest version of the future and what it will take to get there."
2. Demonstrating AI to Bill Gates
Reid recounts a significant demonstration of AI's capabilities to Bill Gates, focusing on AI's ability to understand and interpret biological data. Reid Hoffman: "We arranged a dinner at Bill's house in Seattle...and showed him the latest technology that could pass an AP bio exam."
3. Future AI Milestones
Discussions on upcoming AI goals and challenges, particularly in terms of reasoning and general cognitive abilities. Reid Hoffman: "There's a whole stack of things that will be important milestones as we get there in the development of it."
4. Geographical Spread of AI
Reid emphasizes the need for spreading AI technology equitably across different regions to ensure global benefits. Reid Hoffman: "But we're better off the more locations and the more places we have for doing this."
Actionable Advice
- Embrace AI as a human amplifying tool: Utilize AI to enhance human capabilities in fields like education, healthcare, and entrepreneurship.
- Promote geographical equity in AI development: Support initiatives that distribute AI technology and expertise globally, not just in established tech hubs.
- Engage in informed AI skepticism: Focus on constructive criticism that identifies potential issues and offers solutions to guide AI development safely.
- Foster network effects globally: Build connections between established tech regions and emerging markets to share knowledge and resources.
- Advocate for responsible AI usage: Support policies that ensure AI is used ethically and beneficially, minimizing risks associated with misuse.
About This Episode
Reid recounts the story of a milestone GPT-4 demo at Bill Gates’ house—and shares what series of achievements by AI would most impress and excite him next.
Aria also asks Reid about the investment in technology and AI that he’d like to see be made at the domestic and international level. Lastly, Reid addresses the skepticism people have about AI—and how to best direct that skepticism to the benefit of people.
For more info on the podcast and transcripts of all the episodes, visit https://www.possible.fm/podcast/
People
Reid Hoffman, Aria Finger, Bill Gates, Sam Altman, Kevin Scott
Companies
OpenAI, Microsoft
Books
None
Guest Name(s):
None
Content Warnings:
None
Transcript
Reid Hoffman
I'm Reid Hoffman. And I'm Aria Finger. Too often when we talk about the future, it's all doom and gloom. Instead, we want to sketch out the brightest version of the future and what it will take to get there.
Aria Finger
We're talking about how technology and humanity can come together to create a better future. This is Possible. We love the community that we're building with this show. And we hear you. You've asked for more of Reid's takes.
So starting this season, every other week, Reid will be in the hot seat. I get to ask him a few questions in the spirit of the previous week's episode and get his thoughts on the latest on AI, technology and the future. Reid, I am so excited for our new segment where we get to turn the tables and I get to ask you some questions related to the topic of the day. So last week we spoke with Kevin Scott, such a brilliant technologist and humanist, and we got to talk to him about both the geography of jobs and AI, and how we make sure that tech is spread out equitably, but also the origins of AI and how people are adopting it at different speeds. You know, you and I have been talking to experts in AI, and we're so excited about what's to come, but a lot of people are just learning about it, and there's a lot of skepticism.
So I want to ask you a question about another technologist, one you obviously work closely with: Bill Gates. He was very excited about AI but potentially skeptical about this approach. You know, would this approach be able to give us the gains in intelligence that we were looking for? And one of the things he said was, once we have a tool that could pass an AP bio exam, you know, that's interesting; that's a level of intelligence that I'm excited about.
And so I think you, Sam Altman and Kevin Scott did just that. You went by his house and showed him this latest technology that could pass an AP bio exam. So tell us, what was that like?
Reid Hoffman
So Bill obviously has been a great advocate of the importance of AI for years, well in advance of this current revolution. And part of it, super smart guy that he is, was saying, look, one of the things that's really important is to be able to have knowledge representation of the world and so forth.
And so he was initially kind of throwing out some interesting challenges to the large-scale language model approach. But one of the things that's great about Bill is he learns and updates intensely. And so part of the dialogue was to say, look, if you could show that it could read a set of biology textbooks and pass an AP bio exam, then that would show that it has sufficient knowledge representation, even if you can't point to the symbols in the computer that do it. And we're like, oh, well, we think we can do that.
So we went off, and as part of training GPT-4, we did not train it specifically on the AP bio exam. We just trained it on the wide range of all textbooks and a bunch of other things. And so we arranged a dinner at Bill's house in Seattle. We had, you know, a stack of OpenAI people; the person presenting was Greg Brockman. A set of folks from Microsoft: Satya was obviously there, but some other executives too, like Rajesh and Charlie Bell and others, and Kevin, obviously.
And so we went through it, and we actually even had a woman involved with the biology olympiad exams there to help kind of ask the questions and parse it and evaluate what we were doing. And we started going into showing the demo, and when I asked Bill at the end of that, I said, so where does this rank in tech demos that you've seen? Bill said to me, well, there's only been one other that might be as good as this one.
And that was when I was shown the graphical user interface at, you know, at Xerox PARC. Right. And so it's at least that, if not even better. It was an epic moment, and I think we all felt privileged to be in the room for it.
Aria Finger
And do you have a similar sort of test now? Like, what is your AP bio exam? Is there something that you're like, man, if AI could do XYZ today or by the end of the year, that you would be similarly excited for? Like, we're passing by these milestones, we're blowing past them. Is there something you're excited about?
Reid Hoffman
There's a whole stack. And they range from some things that are kind of more pedestrian, which is thinking about things like Inflection and Pi: can it remember a sequence of actions and execute a plan, being your agent out in the world? Like, hey, I'm going to Rome, book me a good tour of the Vatican Museum, that kind of stuff, as a way of operating. And there's a whole stack of stuff in that. There's also memory and personalization, remembering you and what matters to you. And so I think all this stuff will happen, but there's a bunch of things that will be important milestones as we get there in the development of it.
Then there is a stack of things that we think, okay, high probability of accomplishing, just not exactly clear when. So, for example, a lot of what people are working on right now is reasoning and general reasoning capabilities, because part of what you see is you can break these things; they don't understand when they're making foolish mistakes, like around prime numbers or other kinds of things, and they need an ability to kind of navigate that. And I think some more general reasoning capabilities will improve capabilities for the cognitive industrial revolution.
And then you get to the next level up, which is things that you have a good possibility of doing, but they are hard and will take specific work: drug discovery and other kinds of biological sciences and so forth. There's obviously good work going on with protein folding, with Isomorphic and Baker's folding work and other kinds of things, but there's also going to be some very specific work that will make some amazing discoveries.
And then the next level beyond that is, well, could it start doing things that basically we currently don't see a line of sight for doing, but that could do amazing things? Like, for example, could it help us with the invention and creation of fusion power, or could it discover new branches of mathematics, or make some intersections between different scientific fields? Because there's so much density of information, it goes well outside any one genius's head, even multiple people's heads, and pulling that together, that's unclear.
The probability of that, obviously, you go, well, but we're still increasing cognitive capabilities, it's only a matter of time. It's like, well, not clear, because you could increase cognitive capabilities infinitely for the next hundred years and still not get that right. It's how you're increasing the cognitive capabilities. And that's one of the reasons why people, frequently both the proponents and the critics, can be a little hyperbolic and histrionic in either direction, because they just go, it's increasing IQ.
You know, it's like, well, no, it's increasing a set of cognitive capabilities, some of which already today are superhuman and amazing, and we will continue those. But what set of them, it really depends on how: like, will it be creating new science or not, new physics or not, or other things.
Aria Finger
Well, one of the things we talked a lot about with Kevin last week was geography. You know, he grew up in a rural area and obviously made his way to Silicon Valley. And I have to admit, before I started working together with you, I don't think I really understood the magic of Silicon Valley. And I will admit that there is so much magic there, in the network and in the helping each other and the deep concentration of talent. But I also know that you care deeply about equity and making sure that these new technologies are spread out evenly to everyone. So are there certain geographies, city, state, county, international, where you would like to see more investment made?
When it comes to technology and AI, how can we spread this out? Is it geographic investment that's needed, or something else?
Reid Hoffman
Well, there's a stack of things. I mean, people obviously in the industry like to talk about network effects, and regions have network effects. And by the way, regions have network effects like Hollywood or New York for media. And there are these network effects because it brings in talent. It brings in all the necessary resources for creating the next level, the next evolution of projects in this. And Silicon Valley is obviously one of the great lights in the entire world for what happens here technologically. But we're better off the more locations and the more places we have for doing this.
And the way to do it is a little challenging, because you do get these intense network effects. Like, if people say, I want to move somewhere to maximally create an AI startup in the world, Silicon Valley is a good choice today for that. But by the way, London is not a bad choice. Paris is not a bad choice. And so there's a stack of things, and what you're trying to do is build that up. So some of that's investment in the area, some of that's government policy. Some of that's, like, one of the things that Macron has done very smartly in Paris, which I think has helped with their AI efforts, is saying, hey, if you bring your technology experience from Silicon Valley and other places and come here, you'll have a tax-advantaged status for coming back and working here, to try to bring talent back.
And obviously, when talent's there and can build amazing companies, global capital follows. And obviously also you need high expertise. And, you know, there's a bunch of great technical schools in France, and obviously also in London, Oxford and Cambridge, and other places for this. And all of those things play in. Now, the one kind of thing that I tend to always emphasize, and that's part of the reason I talked about Macron's genius gesture here, is you always want to be building off the network. But how do you extend the network? So it's like, how do you make connections between Silicon Valley and Paris? How do you bring the talent that has learned a bunch of stuff in Silicon Valley and have company formation in Paris?
And so, yes, you want capital. Yes, you want investment. Yes, you want government policy. Yes, you want immigration stuff. Yes, you want it to be startup-friendly, to be able to take bold steps and move things and make an effort at innovation without having to prove your possible innovation benefit 15 different ways before you do anything, et cetera, et cetera.
All of that's important, but you need to be building on the network and leverage as much of a network as you can. I think probably the first time I had that observation and started doing that was when I joined and helped a set of efforts in the UK, first Silicon Valley Comes to Oxford and then Silicon Valley Comes to the UK, with Sherry Coutu, in order to be bringing that network building, which brings a proxy of the strengths we have in Silicon Valley to help elevate other geographies as well.
Aria Finger
Yeah, no, I mean, I love it. So, speaking of AI, you're always gonna get people being skeptical. They were skeptical in the early days. People are skeptical now. You are not a skeptic. What do you think is the most misplaced skepticism of AI, and why?
Reid Hoffman
Well, as you know, I beat the optimistic drum very loudly, because the vast majority of people think that they're being helpful and clever by articulating their skepticism. And actually, in fact, I think most people's articulation of skepticism is actually harmful for humanity and so forth. And not because they should be quiet, but it's like, do the work to articulate your skepticism in a way that helps you build something that's good.
The whole point is, we're trying to get to a really amazing thing and trying to navigate our way there. And so the question is to say, well, what are the most important things that might go wrong, and what are the possibilities of how to navigate around those? So the question is, talk to the people who are trying to figure out what to do with this, and help shift to: here are specific kinds of things that you need to be doing.
So the kinds of things that I advocate are: we have many, many years of this being a human-amplifying technology. So the question is, how do we amplify essentially the right humans, like, say, doctors, educators, entrepreneurs creating products and services, et cetera, et cetera, to do great things for human beings, and less the human beings who are being destructive: criminals, cybercriminals, terrorists, rogue states. And obviously, the engineer tends to be like, let's try to create the tools so they can't be used for bad. And obviously, guardrails and safety on the tools are good. But ultimately, it comes back to human beings. It's like, we don't say, hey, we'll make the nuclear bomb self-determining about when it's going to go off or not. We actually put it in selective hands. And that's an extreme example, but it's kind of like, okay, what are the things we do to make sure that it's in none of the critically bad hands and as many of the good ones as possible?
Another one is, you go, well, we're working very fast in building this technology. Are there any areas where we could possibly be putting a runaway train out there? But you have to look at what the areas of those possibilities are. For example, when people say, well, I would like to set up AI without human beings in the loop in the following thing, you go, well, go look at Dr. Strangelove or WarGames. Let's have relatively few major autonomous systems completely controlled by AI until we understand what the systems do, at a very high level of probability. That, I think, is a general goodness and kind of a principle. And so, like, what are those areas that you should be cautious about?
And it's one of the reasons why, for example, one of the things I've been doing over the last, you know, at least eight years has been arranging 501(c)(3)s, universities, a Vatican working group, governments and everyone else to pull together key leading developers, including a lot of commercial labs, to say, look, let's share information on how to make this align well with very positive human outcomes and how to avoid potential destructive elements, whether it's humans using it or accidents or other things, by sharing that kind of information. But also safety protocols, and awareness of how each other are thinking about it, so they can challenge each other and say, hey, are you doing well enough here on what your potential social impact might be by releasing this technology?
Have you done some red teaming? Have you done some testing, et cetera? And, not surprising for you and other people who know me, it's a classic network thing to increase probabilities of very good things and decrease probabilities of bad things.
Aria Finger
I love it. I mean, to your point about total network building, it sometimes feels like these camps are separate camps and never the twain shall meet. And they don't talk, they don't speak the same language. They both think they're doing the right thing. So sometimes just getting them in the same room is so critical. Reid, thank you so much. Really appreciate it.
Reid Hoffman
Aria. Always fun. Possible is produced by Wonder Media Network. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young.
Possible is produced by Katie Sanders, Edie Allard, Sarah Schleyd, Adrian Bain and Paloma Moreno Jimenez. Jenny Kaplan is our executive producer and editor.
Aria Finger
Special thanks to Asurya Yalomanchili, Sayida Sepieva, Ian Ellis, Greg Beato, Ben Rellis, Parth Battil and Little Monster Media Company.