Is Artificial Intelligence Like Taking the Red Pill or the Blue Pill? | CMO Confidential | DJ Patil

Primary Topic

This episode explores the complexities and potential of artificial intelligence (AI), comparing its transformative impact to choosing between the red and blue pills in The Matrix.

Episode Summary

Host Mike Linton and guest DJ Patil dive deep into the state and future of AI, discussing its applications and the hype surrounding it. They emphasize the rapid development of AI technologies, particularly generative AI, and their impact on various sectors including marketing, legal, and customer service. The conversation also touches on the risks associated with AI, such as data integrity and the generation of misleading information. DJ Patil provides insights from his extensive experience in data science and discusses AI's potential to enhance decision-making and operational efficiency in businesses.

Main Takeaways

  1. AI's development is ongoing and transformative, likened to choosing between the Matrix's red and blue pills.
  2. Generative AI is making significant strides but comes with challenges like data integrity and hallucination risks.
  3. AI applications are potent in summarization and routine tasks but less reliable for creative or novel content without oversight.
  4. The regulatory landscape for AI is evolving, with significant implications for its deployment across sectors.
  5. Businesses must engage with AI technology actively to stay relevant and leverage its benefits effectively.

Episode Chapters

1: Introduction

Mike Linton introduces the podcast and the topic of AI, highlighting its controversial nature and widespread impact. DJ Patil: "AI has the power to revitalize or destroy industries."

2: The State of AI

DJ Patil explains the evolution of AI from simple machine learning to today's advanced generative models. DJ Patil: "We are now seeing AI that can perform tasks with a level of creativity and efficiency previously unimagined."

3: Practical Applications and Challenges

Discussion on where AI is currently effective and its limitations, particularly in generating reliable content. DJ Patil: "AI excels in structured environments but can falter when asked to generate entirely new ideas or content."

4: Future Outlook and Ethical Considerations

The conversation shifts to the future of AI, including its economic impact and ethical use. DJ Patil: "The path AI will take is still uncertain, with potential for both great benefit and significant risks."

Actionable Advice

  1. Explore AI Capabilities: Engage directly with AI technologies to understand their potential and limitations.
  2. Stay Informed on AI Developments: Keep up with the latest research and discussions in AI to anticipate changes in the technology.
  3. Implement Data Integrity Measures: Ensure the accuracy and consistency of data used in AI systems to avoid errors and biases.
  4. Adopt AI Responsibly: Consider ethical implications and societal impacts when integrating AI into business processes.
  5. Educate Your Team: Provide training and resources to help your team understand and effectively use AI.

About This Episode

A CMO Confidential Interview with DJ Patil, a Great Point Ventures investor and former U.S. Chief Data Scientist in the Obama Administration. DJ discusses why everyone should "Take the red pill," his belief that AI will accelerate at speed, and why you shouldn't delegate this responsibility to a single person or team. Key topics include: why "boring, repeatable problems" offer the best current use cases; how AI can move you from creator to editor; how the DoD is thinking about AI; why failing now allows for success later; and why many marketing databases are not ready for prime time. Tune in to hear how generative AI started with a database of cat photos.

People

Mike Linton, DJ Patil

Companies

Nvidia, Klarna

Books

"Impromptu" by Reid Hoffman and ChatGPT

Guest Name(s):

DJ Patil

Content Warnings:

None

Transcript

Mike Linton

The CMO Confidential Podcast is a proud member of the I Hear Everything podcast network. Looking to launch or scale your podcast? I Hear Everything delivers podcast production, growth, and monetization solutions that transform your words into profit. Ready to give your brand a voice? Then visit ihereverything.com. Welcome to CMO Confidential, the podcast that takes you inside the drama, decisions and choices that go with being the head of marketing.

DJ Patil

Hosted by five-time CMO Mike Linton. Welcome, marketers, advertisers, and those who love them, to Chief Marketing Officer Confidential. CMO Confidential is a program that takes you inside the drama, the decisions and the politics that go with being the head of marketing at any company, in what is one of the most scrutinized jobs in the executive suite. I'm Mike Linton, the former CMO of Best Buy, eBay, Farmers Insurance, and Ancestry.com, here today with my guest, DJ Patil. Today's topic: is AI like taking the red pill or the blue pill?

Mike Linton

Now, DJ held leadership analytics positions at eBay and LinkedIn and served as the Chief Data Scientist of the United States of America. He's a senior fellow at Harvard and also an advisor and investor. Full disclosure: we worked together at eBay, where DJ was, in Star Trek terms, the Spock on the bridge. This is his second time on the show, and today we are using the Matrix reference to update our listeners on a topic that just keeps on going and going: artificial intelligence. Welcome, DJ.

DJ Patil

Oh, thanks. Glad to be back. Great to have you. An excuse to hang out. Yeah, there we go.

Mike Linton

We like it. So, okay, first question now, DJ: where are we really in AI adoption? I mean, the market is valuing Nvidia at $2 trillion, with a T. Every company is saying they have AI, which we know we don't believe. And AI is coming for everybody's job.

It's going to revitalize everything and it's going to destroy the world all at the same time. Give us a take. What's really going on? Yeah, so I think, first, to put this in context, AI has been going on for a long time in various forms. And so this is the latest form of AI.

DJ Patil

We saw the early days of people trying to just make computers come up with rule sets, ways to come up with insights. We realized, oh, wow, we need a lot of data, we need a lot of compute. Then we were able to move into this phase of machine learning. This is really where, Mike, you and I were starting to work on these problems together. It was like, hey, why can't I target better on the marketing side? Why can't I figure out and have these systems adapt to fraudsters that are trying to take advantage of your systems?

How do we get the data, the compute, all that stuff to work together? And then we moved to this major breakthrough that happened because we started to get the large datasets, the most notable being what was called ImageNet, which is this giant catalog of cat photos, effectively, plus much more. And people were able to say, whoa, we can train this to come up with new insights around computer vision and other types of things. This is a bunch of Fei-Fei Li's work, among others. And those first systems were what's called deep learning.

That was Geoffrey Hinton out of Toronto and his students. And you did say cat photos, did you not? I get it. I just want to make sure everyone heard that right.

Exactly. It was a lot of that. And then we've seen this latest breakthrough, which is around what people refer to as generative AI. This uses these new technologies, referred to as transformers, and other similar techniques to really create these breakthroughs. The surprising and most exciting thing about this is that when you give this new form of AI instructions, or what we refer to as prompts, we get back really powerful, creative, almost eerie results at times.

Like, wow, that was weird. And that could be everything from trying to create a new cat photo on one of these image systems that use something called diffusion models, or using large language models to create some type of poetry or some other type of summarization of work. So we're seeing a ton of this taking place. And what's been really exciting is we have never, in humanity's history, seen this amount of energy. This many raw calories of people working on one lane of technology has never occurred before.

And so because of that, what you should expect is massive breakthroughs that are going to continue in this area. But as I'm sure we're going to get into, there's plenty of hype in here as well. Yeah, a lot of hype. And I appreciate that explanation, especially since now everyone knows a lot more about AI and cats than before they started listening to the show. So let's talk about the next few years.

Mike Linton

How is this going to play out? Can these valuations actually be real, or is this too much hype? And when does this actually translate into true sales and profits? What's your take on that?

DJ Patil

Yeah, so there's a few ways to think about this. The first is, I think the useful thing is to ask, where is AI applicable today? Like, where can you use it today? Where are the limits of it? And so some of the things that it is excellent at doing is summarization.

I can take a whole lot of articles, I can take a whole lot of different things, and I can say, explain this to me. Explain it to me like I'm a five-year-old. Explain it to me like I'm a 13-year-old. Explain it to me like, you know, pick your favorite criteria here.

Yeah, that is awesome. If you want to write a letter of recommendation or a rejection letter and you just say, hey, look, here are the bullet points, write me an explanation, or give me that recommendation letter or rejection letter, it's going to crush that. It does superbly well.

Where it starts to really go off the rails is asking it things where you don't fully know what's going on. And this is what's often referred to as hallucinations, or other types of issues that start to show up. Suddenly it's making things up. The famous examples being, you know, people have been using it to write legal briefs, and then it makes up a whole bunch of cases. I love that.

Mike Linton

It makes up cases which, you have to remember, were written by cats. Exactly. Well, interestingly enough, the way to think about this technology is that what it does is take this huge amount of data and then kind of stumble forward. Like if you've ever caught your toe on a tree branch or stubbed it on something, and then you kind of stumble, stumble, stumble and try to recover. Think of AI like that.

DJ Patil

It's sort of stumbling forward. Statistically, it's like: next word, next word, next word. And so if it's going in the wrong direction, or it doesn't know how to look at things holistically, it looks at things very narrowly. This is interesting. We had Tom Goodwin on the show, who's a futurist, and he called the current state of AI the equivalent of having 1 million cocky 16-year-old interns working for you.
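DJ's "stumbling forward, statistically, next word, next word" description is next-token prediction. A toy sketch of the idea, where an invented bigram probability table stands in for the billions of learned parameters in a real language model:

```python
# Toy illustration (not a real LLM): a language model generates text by
# repeatedly predicting the most likely next token given what it has
# produced so far. The probability table below is made up for illustration.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
    "dog": {"ran": 1.0},
}

def generate(start: str, max_tokens: int = 3) -> list:
    """Greedy decoding: always pick the most probable next token."""
    tokens = [start]
    for _ in range(max_tokens):
        choices = bigram_probs.get(tokens[-1])
        if not choices:  # no known continuation: stop
            break
        # "stumbling forward": each step only sees local probabilities,
        # with no holistic view of where the sentence is heading
        tokens.append(max(choices, key=choices.get))
    return tokens

print(generate("the"))  # ['the', 'cat', 'sat', 'down']
```

If the first step heads in the wrong direction, every later step compounds it, which is one intuition for why these systems hallucinate.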

Mike Linton

Helpful, but not really responsible enough to be trusted. We've seen the whole Gemini thing. We've seen the Air Canada promotion that was just so wrong. Tell us about that statement, that it's not responsible enough to be trusted.

And then some of the mistakes we've seen here, on both Gemini and Air Canada. So there are a couple of dimensions to this. Erik Brynjolfsson, who's an economist at Stanford, has actually shown this in call centers: if you take brand-new people in the call center and you give them an AI assistant, so that's kind of a copilot-esque situation, their performance jumps radically, because they are learning and being trained at speed.

At their own pace, too, because it's personalized. Right? Personalized and guardrails also. So it's going to prevent you from doing stupid stuff. Also, if you take the most seasoned workers, what it does is it allows them to be more efficient.

DJ Patil

It gives them a bunch of macros, if you will, to be faster at doing things. So there is a performance lift there, but not as much. And Klarna just released this result about how much they were able to save. I don't remember the exact numbers. Klarna is a pay-over-time company, right?

Exactly. Yes. It's like you get to distribute your payment structure over a period of time. But what they were able to do in their call center is save extraordinary amounts of time, because they're able to assist people by getting them to the right information.

So we are seeing very real, material things. But those are all repeatable, structured answers versus any blue-sky answer, because what you have is massive databases. That's where you also have only so many paths and only so many answers, which would also suit the 1 million cocky 16-year-old interns. They can figure that out. Is that right or not?

Well, so here's the way I would think about it.

Places where you have stupid, boring, repeatable problems, where you're just answering something over and over again. It's like, you know, the equivalent of, you go into a conference center and ask, where are the bathrooms? Those types of problems are done really well. Think of where one person takes a piece of paper, hands it to another person, and they just sort of fill the form out, and the forms are just in different spots. It's going to do really well there.

Where it doesn't do well is, you know, creative assignments right now. Like, hey, give me something brand new, novel, literally create something with narrative: it doesn't do that well at the moment. Now, there is a view, what people call sparks of artificial general intelligence, that it does sometimes give really spectacular, surprising results. And those insights that come out of this are some of those. So as an example, Reid Hoffman wrote a book called Impromptu jointly with ChatGPT.

And it was like, hey, write these things. So you can imagine: I did this experiment where I took something very complicated, and I said, let me pretend I'm going to write an essay or a memo to the president. I gave it a prompt saying, I need to write a three-page memo for the president explaining healthcare policy in the United States, explaining an issue that is 20% of GDP. And so it gave me this answer.

And I was like, oh, this is an interesting starting point. It's missing this and this and this. And here's the key thing: most of the time, we have been trained to think of the world like a query, like Google. If we don't like the result, we erase it and start over.

In these systems, I say, hey, you forgot about this and you forgot about this, and you emphasized this too much. Go back and try again. And so it does it again. What it's done now is, and it doesn't even have any hurt feelings when you talk to it.

It doesn't have any hurt feelings. If anything, it's not grumpy; it's not like, I had to work late and there's no pizza in the office. Instead, what it does now is it allows me to not be the creator. It allows me to shift to editor. And that is one of the most powerful things, especially around marketing.

And what we see also on the image creation side is we're able to shift modalities to say, look, we're not going to be precise, we're not going to nail it, but if you start there, you're not dealing with a blank page. I've got some stuff there, and now I can iterate. But how about Gemini and then the Air Canada thing, the two massive public mistakes? What happened there? Well, so this is the part: it's what's in the data. These are the natural results of what's in there.

What people are trying to do to prevent these things: there are two things they're really trying to do right now. One is what's called alignment: how do we align the model with our values? The other is something referred to as RAG, a retrieval-augmented generation architecture, which basically says, hey, I already have a bunch of the content; I'm going to go look internally for those documents and serve those. So think of it as a combination of AI plus search, okay?

And what that does is say, I already have the answers; let me tee those up to give you the right thing. The problem is, if you open up these systems to everything, you're going to find places where it breaks. So the question is, how do you put appropriate guardrails on this? Something we haven't talked about.
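The RAG pattern DJ describes, retrieve relevant internal documents and hand them to the model as context so it answers from known content instead of free-associating, can be sketched minimally. The keyword-overlap scoring and document set here are illustrative stand-ins; production systems use embedding similarity and a real LLM call:

```python
# Minimal RAG sketch: "AI plus search". Retrieve the most relevant
# internal documents, then build a prompt that constrains the model
# to answer from them. The scoring below is a crude stand-in for
# embedding-based retrieval.
def score(query: str, doc: str) -> int:
    """Crude relevance: count shared words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    """Tee up retrieved content so the model answers from it, not from thin air."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refunds require the original receipt.",
]
print(build_prompt("How long do refunds take?", docs))
```

The guardrail is in the prompt: by serving already-vetted answers as context, the system behaves like the "repeatable, structured answers" case rather than the blue-sky case, which is exactly where the Air Canada-style failures came from.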

So I'm the co-chair of this task force for the Department of Defense on generative AI, trying to figure out what we should do here. And some of the things that are really fascinating right now: the DoD and the government are actually thinking about this in a really sophisticated way, because what they're asking is, hey, it's not one-size-fits-all. Certain problems are really great for AI usage. Other problems, no way. What's an example of a no way that you can tell us?

Well, so the no-ways are making assessments about things you might imagine, like targeting systems or weapon systems, where very clearly Department of Defense policy right now is that there has to be a human in the loop. A human has to be in the loop before you act. Autonomous weapon systems are one of the clear examples. Now, most people, when they think about the Department of Defense, think weapon systems, weapon systems. They forget that some of the biggest problems actually are things like, how do we move fuel, water, supplies: supply chain. It's supply chain at a thousand times the scale of what everyone else is dealing with. And much of the algorithms for that were designed in the 1960s; we haven't had major breakthroughs since.

Now with planning or ideas, you can plan your itinerary for spring break or summer vacation. My wife is even a heavy user of this, where she's like, look, why should I go to 15 websites in Google, or whatever browser or search engine you're using? She's like, I'm just going to start here and say, oh, now this is giving me a list that I'm going to use as a jumping-off point. Yeah. She's like, hey, look, okay, this is the itinerary.

No museums because one kid's like, I don't want to go to any museum. Plan a whole day only with museums because the other kid only wants to go to museums. So we're able to use this as a discussion forum that then allows us to do it. Is the system going to know, like, hey, that museum is closed for renovation? No, but it allows me to kind of construct something that I can now then go test and figure out what is right.

Mike Linton

I'm just thinking how great the family dinner conversations are. And hopefully there's no PowerPoint involved when you're putting together these family vacation choices. Well, it's funny you say this, because, to give you a sense, and I'm pretty confident my kids are not going to see this, here's something real that shows how hard this is. My son just got busted for using ChatGPT in school. And the reasoning, which shows how poorly we are prepared to handle the situation, was: look, you can't use ChatGPT, because we're afraid you're going to plagiarize.

DJ Patil

And he's like, well, but I used it to brainstorm. And they're like, well, that's using it. And he's like, but what if I just use Google to brainstorm? Why is that allowed? And so really quickly, the 17-year-old blew a hole right through the whole school policy, because it's kind of the equivalent of that moment in 2005, or pre-2005, where it was like, is Wikipedia allowed, or are you only allowed to use Encyclopaedia Britannica?

And in 2005, the reason this changed is somebody did a comprehensive test and showed Wikipedia and Encyclopaedia Britannica were about equal at that point. Okay, so while we're on it, because I want to flip this over to marketing in a minute: how is the government really going to regulate, or attempt to regulate, this business? And even if they want to, can they really get this genie back in the bottle? And, you know, there are all these predictions, like a 17% chance AI is going to destroy the world, which I love that someone is predicting, but hopefully they didn't use AI to just create that.

Mike Linton

What'd you say? I said, I hope they didn't use AI to come up with it. AI-created: there's a 0% chance I'm going to destroy the world. But okay, how's the government going to handle this? And will they really actually be able to do anything?

DJ Patil

Yeah, so there are a few movements taking place that I think everyone listening or watching should pay attention to. First, at the large scale, the nation-state scale, we've got a bunch of moving parts. You know, the EU is doing one lane of work, the US is doing another lane of work, and other countries are trying to think about what's going on; I'll get back to that in a second. The second layer is you have a whole bunch of states that are talking about regulation. So you have Texas doing some stuff.

California's got a bunch of proposals on the table, so we could have even more fragmentation here across the board. And then third, we have to remember that this is an incredibly tough area of national security. So it's kind of like, the best version is: your best defense is offense. Right. And so how do you sprint forward?

Because you've got all sorts of national interests at stake. At the US federal level, there are two dimensions. One is the president has signed executive orders that say, how should the federal government, without legislation, actually do things? And that's starting to shift and bend the arc toward saying, look, we need to think about trust, we need to think about alignment.

How do we actually have verification systems? There's a bunch of really good stuff in there. Congress is a little bit stuck trying to figure out what to do, and we need to be very careful with that, because if you over-regulate, you've taken away your ability. You go on defense, and you're kept on defense by all the other countries that don't take their foot off the offensive gas. Exactly.

Well, this is what's happened with GDPR in Europe: a bunch of European countries have basically taken themselves out of the game in AI, because you can't use the data effectively. So we have that problem. So it's very fast-changing. It's evolving very quickly. And the other thing is, the regulation is really focused on today's version of AI.

I can't tell you what the next breakthrough is. When the transformer, the architecture behind what you see as large language models, came up, when that first paper showed up, I think most of us were like, okay. And then we started to realize, whoa, those are surprising results; what is going on? And then, you know, it's just a little over a year since ChatGPT really went open and wide, and everyone was like, what? I can do those things?

I can do those things. And similarly, we continue to be surprised at some of the special results. And I'll tell you one of the ones that's there, healthcare. Are we ready to do clinical stuff with AI where it's actually taking care of you? Not in my opinion, not from the people I talk to or work with.

But what we are seeing is a large number of patients who are using AI to actually get insights about what they've been diagnosed with. So they're conversing with it, they're getting value. We are seeing physicians who are asking, how can I deliver this message or explain this complicated issue in an emotionally better way? A little warmer. Yeah, a little warmer. Right.

Rather than just, here's the science. And so that gives you an indication of some of the value areas that we see, while we also have to make sure the safeguards are there, because we don't want this hallucination-type aspect happening in a clinical setting. No, it'd be awful. All right, let's write marketers into the story.

Mike Linton

Is AI here to take your job or improve it? And how do you keep the Terminator from taking your job? I'm just writing all kinds of science fiction movies here. No, it's the question of the day. Here's the number one piece of advice I have for anybody out there right now.

DJ Patil

If you're not playing with these systems, you're going to miss out. The equivalent here is, hey, look, the mobile phone showed up and you're going to ignore it?

Like, if you stayed on your BlackBerry, you were missing out. You had to have your BlackBerry, your iPhone, your Palm Pilot; you had to start playing to get a sense of where things were going. Same with what we saw with the different versions of the web, going from Web 1.0 to social media to this new form: you have to be playing with it. Two is learning how these technologies can benefit you as you go forward.

Where are the limits of this? So, for example, image generation: is that good for you in your world? Is it bad for you? What about some of these video elements? Just play and create. You have to carve out space and time to do this.

Mike Linton

We talked about this a little bit on some earlier shows. It's also, don't delegate this to a person. That's right. Like, everyone should be playing. But when you are looking at, Mike...

DJ Patil

Let me pause, because what you said is so important. This is so important. If you delegate this, this is that moment in mobile when it's like, well, I have my mobile guy. Yeah.

And then everyone who wasn't the mobile guy is out of a job. You have to be the mobile guy; you have to be the AI person. You have to lead from the front by playing with these things. To your point at the beginning: if you're not taking the red pill now... There we go.

With this amount of technology and seminal shift happening, you're going to get passed over. Yeah, I think that's really good. And I think the other thing is, if you delegate this to a person, or even a small group of people, the rest of the people won't keep up and they won't put it into the company. It will always be viewed as a special thing. So I think that's really important.

Mike Linton

When you are looking around, though, at marketing applications actually out there: we had a whole show on vendor management, and vendor management of AI. Any best practices you can share with our listeners? So the biggest one right now is, honestly: don't get locked into any vendor-specific thing. It's too early.

DJ Patil

This is an area where you've got to have flexibility. Any tooling or product area that sort of locks you in and prevents you from migrating: what's the migration cost? That's the primary thing I ask, because a lot of this technology, a bunch of the early forms of this, are likely to get integrated into other things, or they're going to become commodities. Right. They're going to be the price of entry to play.

Exactly. And here's Zoom's AI thinking I'm giving a thumbs up, right? So it just shows we're in very early stages. It's almost like that version where we're going to look back and be like, wow, that was how we used to do things; how antiquated that felt.

Mike Linton

So when you look at this, what do you recommend or what are you investing in when you look at all the AI potential investments? And what's your advice on that front? Yeah, man. Nvidia. Yeah.

$2 trillion is a really big number. Exactly. A lot of people are trying to figure out how we can build these things. But we're in a supply-chain-constrained world, and the people who make these chips are the ones that determine the success trajectory of companies. The other thing is: you have to get your data in order.

DJ Patil

If you don't get your data in order, none of this stuff works. And too often, especially in the marketing world, our layers of data, our data, is so bad. We had a show with Sean Peters, who's really big at Publicis, on this, and one of the things he said is: don't put AI on a pile of really unintegrated, stupid data, because you'll be even farther behind and you won't have visibility. Right?

That's right. Because the biggest issue is, when you've thrown it into this giant AI system, it's very hard to get visibility into what happened. Why did it go wrong? In traditional systems, we can go in and debug it. We could be like, oh, here's where the error happened.

Or, oh, look, this data thing was corrupted. Let's go fix that. And then you rerun it and it works. Here, it's like, I don't know. Yeah, you have no idea where it's

wrong, and you have no idea. Exactly. It's like plumbing that puts something out in your tub, and you can't see any of the piping. That's right. So your fidelity of data: if you have a bunch of data that's really high-fidelity here and really crappy over there, is it going to be biased toward this portion of the data set?

Absolutely. And that's no different than our own decision making. When meetings happen and someone's like, well, let me show you all this great data. You naturally gravitate to that. So I have to call this out for all the marketers, which is, this is a super important point.

Mike Linton

If you don't really have a good data infrastructure and platform, and you stick AI on top of it, it's going to be such a big problem. That's right. And most marketers, let's be honest: most of the IT systems, the technology that has been built for marketing, were built to get around the more traditional infrastructure that we have. That traditional infrastructure, the reason it is so rock-solid, expensive, all the things that are a pain with it, is because it's battle-hardened to do the heavy lifting. Right.

To allow you to close the books every month and do all this. But it does not incorporate the call center data, the vendor data, a lot of the other data. If you don't have this right, you will make a bunch of dumb decisions. Exactly. It's very easy for things to go off the rails here if you're not careful.
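DJ's "get your data in order first" point can be made concrete with a small sketch: before putting AI on top of marketing data, profile it for completeness so you know which slices a model can be trusted on. The field names, sample records, and threshold below are all hypothetical, chosen only for illustration:

```python
# Hedged sketch: profile marketing records for completeness before
# feeding them to any AI system. Sparse fields are exactly where a
# model will be "biased toward this portion of the data set".
def profile(records: list, fields: list) -> dict:
    """Return per-field completeness (share of non-empty values)."""
    total = len(records)
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / total
        for f in fields
    }

# Hypothetical customer records; in practice these would come from
# CRM, call center, and vendor systems.
records = [
    {"email": "a@x.com", "region": "US", "last_purchase": "2024-01-10"},
    {"email": "b@x.com", "region": "",   "last_purchase": None},
    {"email": "",        "region": "EU", "last_purchase": "2023-11-02"},
]
completeness = profile(records, ["email", "region", "last_purchase"])

# Flag fields too sparse to train or prompt on (threshold is illustrative).
weak = [f for f, share in completeness.items() if share < 0.8]
print(weak)  # ['email', 'region', 'last_purchase']
```

A report like this is debuggable in exactly the way DJ says a giant AI system is not: you can see which pipe leaks before the water reaches the tub.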

Hey, so you gave a graduation speech at Maryland a long time ago, in 2013, talking about failure.

We don't have time to go into all of it, but just give us the highlights about the relevancy of that failure discussion and what our listeners should take away from it. Yeah, it's actually such an interesting time, because I'm just starting to formulate this: I'm super honored to be able to give one of the commencement speeches at Berkeley's newest college, the first new college there in 50 years, the College of Computing, Data Science, and Society. Nice. So I've been thinking a bunch about this.

DJ Patil

So the Maryland one was really about the importance of failing fast. And, Mike, you and I lived this: test it, iterate it, go. If it's not working, shift gears. The way I describe it to people is, it's not just about velocity. It's about acceleration of learning, for the math people out there.

It's about being first-derivative positive and second-derivative positive. It's the speed at which you're improving. And, you know, I don't have to tell you this, but for other people, the version of it is: we're not going to win on lap one. We're supposed to win on lap 15, because we get a little bit smarter, we get a little bit better, a little bit better, and so we compound, and by the time we get to lap 15, no one can catch us. And so what is your way in life?

To fail fast, iterate forward, and find, or construct, environments where it's safe to fail. I'll give a very specific example that may sound totally crazy. We went to the Secretary of Defense, Ash Carter, and we said, we believe firmly that there needs to be a new paradigm for cybersecurity to protect ourselves.

The way we're going to do that is we are going to invite America to hack the Pentagon. Yeah, you can imagine the Secretary's response at that point. Now, luckily, we pitched it on a plane where he couldn't leave or throw us out. But we said, sir, here's how we're going to do it. Smart.

We are going to craft this program and call it Hack the Pentagon. But really, you're not hacking the whole Pentagon; you're hacking this one small area that is safe. And by the way, if it turns out that people say, oh no, you hacked the Pentagon, guess what?

All the other nation-states have already hacked the Pentagon. That's right. So this is a safe way to do it. And when we launched it, people were hacking that portion of the Pentagon within seven minutes.

Seven minutes. And so we were able to start hardening it very quickly and take those lessons to the more sensitive things. So it's a way of finding out safely where we screwed up. But I think this is so important for marketers in particular, because a lot of the time they're putting out what I'm going to call binary efforts.

Mike Linton

It's a win or a loss. The Super Bowl ad is not a fail-fast moment. Yes, the Super Bowl ad is an excellent example, but there are a lot of examples of all-or-nothing bets in a marketplace that is too unpredictable to have much certainty about. So I think it's fail fast and learn as fast as possible. That's great. So, DJ, we're at the end of the show and it's...

DJ Patil

Wait, wait, let me just say one more thing, because I want more. Here we go. Fail fast has to be your mantra in AI, because this is such a fast-evolving space. I have no idea where we're going to be in a year. I have no idea where we're going to be even in nine months.

Mike Linton

Maybe that's why we're going to have you back in nine months. I'm happy to. And part of this, as you think about your failure rate, is you have to get the adjacent teams, your partners in the marketing world, to understand that we have to go on this journey together. This has to be a "we." We're going to experiment, we're going to try things, and we're going to find safe ways to test things out so we can learn, build on it, and eventually get the real returns that we know are coming.

I think when we come back in nine months, we can also talk about how small and medium-sized companies should do this. But as we're at the end of the show, same question as always for every show. You can pick one or both of these, but you have to pick at least one: the funniest story you can share on the air, or practical advice we have not yet talked about. Pick one or both.

DJ Patil

Oh, man. Most practical advice, there we go. How am I staying on top of AI? You and I grew up pre-word processors, pre-cloud.

So the key is I hang out with really young people who know how to use this tech. And I'm like, hey, how did you do that? And they roll their eyes. They're like, man, how does that guy function? And I am watching them and learning through them.

So hang out with really young people, your interns, your other people. Get out of your office, sit next to them, and ask them how they are living with and using these systems, and they'll teach you. And then you can always fight with your high schooler about the ChatGPT rules. Exactly.

Mike Linton

All right, very good. Thank you, DJ, and thanks to everyone for listening to CMO Confidential. Look for more of our shows on Apple, YouTube, Spotify, and the iHeart network, including "Marketing: The Battle Between Believers and Non-Believers. It Was the Best of Times and It Was the Worst of Times"; from a top executive search consultant, "Why Is B2B Marketing So Bad and What to Do About It," parts one and two; and "Is the CMO Position the Hardest Job in Business?"

If you are enjoying our show, please like, share, and subscribe. Hey, all you marketers, stay safe out there. This is Mike Linton signing off for CMO Confidential.