Cybersecurity in the age of AI, with Steve Schmidt, Amazon's chief security officer

Primary Topic

This episode discusses the evolving landscape of cybersecurity in the AI era, featuring insights from Amazon's Chief Security Officer, Steve Schmidt.

Episode Summary

In a detailed conversation on GeekWire, Steve Schmidt, Amazon’s Chief Security Officer, explores the intricate relationship between cybersecurity and emerging AI technologies. Schmidt emphasizes that cybersecurity is increasingly about understanding human motivations behind attacks, not just technical defenses. The episode delves into how AI is reshaping the approach to security, from automating mundane tasks to enhancing phishing defenses, stressing the importance of maintaining human oversight in security protocols. The discussion also touches on Amazon's internal use of AI, including AI's role in simplifying complex security data for non-technical stakeholders.

Main Takeaways

  1. Cybersecurity is fundamentally about people, not just technology.
  2. AI is transforming cybersecurity by automating routine tasks and improving response strategies.
  3. Human oversight remains crucial in security, despite advances in AI.
  4. AI tools like Amazon's CodeWhisperer help identify security vulnerabilities in code.
  5. AI’s impact on cybersecurity is dual-sided, aiding both defenders and attackers.

Episode Chapters

1. Introduction

Todd Bishop introduces the episode and highlights recent cybersecurity developments. Todd Bishop: "Welcome to GeekWire."

2. The Role of AI in Cybersecurity

Schmidt discusses the integration of AI in cybersecurity and its implications. Steve Schmidt: "AI does not replace human oversight but enhances our response capabilities."

3. Human Factors in Cybersecurity

The conversation shifts to the psychological aspects of cybersecurity, emphasizing the human elements driving security breaches. Steve Schmidt: "Understanding the human factor is key to effective cybersecurity."

4. Practical Applications of AI in Amazon

Schmidt provides examples of how Amazon uses AI internally to bolster security measures. Steve Schmidt: "AI helps us simplify complex security challenges into actionable insights."

5. Future of AI and Cybersecurity

The episode concludes with Schmidt's insights on the future integration of AI in cybersecurity strategies. Steve Schmidt: "The balance between AI advancements and human decision-making will define the future of cybersecurity."

Actionable Advice

  1. Maintain human oversight in AI-driven security systems to ensure accuracy and accountability (see the sketch after this list).
  2. Use AI to automate routine and mundane cybersecurity tasks, allowing human staff to focus on complex decision-making.
  3. Continuously update and train AI models to adapt to new cybersecurity threats.
  4. Implement rigorous testing and validation processes for AI-generated security measures.
  5. Stay informed about the latest developments in AI to understand both its capabilities and its limitations in cybersecurity.
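
To make the first two items concrete, here is a minimal sketch of the human-in-the-loop pattern Schmidt describes: a generative model drafts a plain-English summary of a security event via the Amazon Bedrock Converse API, and an analyst must explicitly approve the draft before it goes anywhere. The model ID, prompt, and approval flow are illustrative assumptions, not Amazon's internal tooling.

```python
# Minimal sketch of a human-in-the-loop summarization workflow, assuming the
# Amazon Bedrock Converse API via boto3. The model ID, prompt, and approval
# flow are illustrative assumptions, not Amazon's internal tooling.
import boto3

bedrock = boto3.client("bedrock-runtime")  # assumes AWS credentials are configured


def draft_summary(raw_event: str,
                  model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0") -> str:
    """Ask a Bedrock-hosted model for a plain-English draft summary of an event."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{
            "role": "user",
            "content": [{"text": "Summarize this security event in plain English "
                                 "for a non-technical executive. Do not speculate "
                                 "beyond the data:\n\n" + raw_event}],
        }],
        inferenceConfig={"maxTokens": 400, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]


def human_approved(draft: str) -> bool:
    """The human-in-the-loop gate: an analyst must explicitly accept the draft."""
    print("--- AI-drafted summary (verify carefully; models can drop a 'not') ---")
    print(draft)
    return input("Approve for distribution? [y/N] ").strip().lower() == "y"


if __name__ == "__main__":
    event = "2024-06-10T03:12Z deny tcp 10.0.4.7:49213 -> 172.16.1.9:445, 14,000 hits"
    summary = draft_summary(event)
    print("Approved." if human_approved(summary) else "Rejected; escalate to an engineer.")
```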

About This Episode

It was a big week for cybersecurity for Seattle's tech giants. Microsoft President Brad Smith was in Washington D.C., testifying before the U.S. House Homeland Security Committee about the Redmond company's security challenges. Listen for highlights at the end of the show.

Meanwhile, Amazon held its annual AWS re:Inforce cloud security conference in Philadelphia. The rise of AI has added some big new wrinkles to the issue of cybersecurity, and AI was one of the main topics in a conversation that I had a few weeks ago with one of the people who keynoted the AWS event this week, Steve Schmidt, Amazon's chief security officer.

People

Steve Schmidt, Todd Bishop

Companies

Amazon, Microsoft

Books

None

Guest Name(s):

Steve Schmidt

Content Warnings:

None

Transcript

Steve Schmidt
The thing that I took the most out of my experience at the FBI was a focus on the people behind adverse actions. A lot of my career, for example, I was focused on Russian and Chinese counterintelligence. And if you look at the motivators for espionage in the classic world, they're exactly the same things that are motivators for hackers right now. It's money, ideology, coercion, or ego.

Todd Bishop
Welcome to GeekWire. I'm GeekWire co-founder Todd Bishop. We are coming to you from Seattle, where we get to report each day on what's happening around us in business, technology and innovation. What happens here matters everywhere. And every week on this show, we talk about some of the most interesting stories and trends in the news.

This was a big week for cybersecurity for our hometown tech giants. Microsoft president Brad Smith was in Washington, DC, testifying before the US House Homeland Security Committee about the Redmond company's cybersecurity challenges. Stay tuned for highlights at the end of the show. Meanwhile, Amazon held its annual AWS re:Inforce cloud security conference in Philadelphia. The rise of AI has added some big new wrinkles to the issue of cybersecurity.

And AI was one of the main topics in a conversation that I had a few weeks ago with one of the people who keynoted the AWS event this week, Steve Schmidt, Amazon's chief security officer. Steve Schmidt, thank you very much for joining me. Thank you. Really happy to be here. So you are the chief security officer at Amazon.

Steve Schmidt
And prior to that, you were in the chief information security officer role for AWS. But you've described your role, I've heard you say, as playing chess while practicing psychology, and not only playing chess, but playing multiple games of chess simultaneously. Can you explain more what you mean by that and elaborate on that a little bit? Sure. One of the interesting things about my job is a lot of people get confused often about security and information systems.

They think it's primarily a technology problem and it's not. Information security is really all about people, because if you think about it, well, at least right now anyway, we'll see what happens in the land of generative AI. But right now, machines don't attack each other. It's people telling machines to do things that cause problems. And people are motivated by things like money or ideology or coercion.

But usually the biggest one, as you know, is ego. It's, I want to be the biggest, baddest hacker on the planet. And so my job is often about solving puzzles. Who's doing what? How does this little piece of information fit in amongst all the others? All while playing chess, meaning we've got to think a few moves ahead of the adversary and figure out what defenses do we need to build.

Where do we need to improve, where do we need to instrument? What kind of tools do we need to think up? Because the adversaries are always changing, and at the same time, you get to practice psychology. What motivates this person? What are they interested in?

What's their level of risk tolerance? What can I do to push the equation just far enough to our side that we don't have a problem? Ego is so interesting, and not that security engineers playing defense don't have ego themselves, but I think you're right. It does drive that adversarial mindset, that black hat hacker mindset, probably disproportionately.

How do you take advantage of that? Sure. Quite often you can foresee what these folks are interested in, and in many cases a lot of it is predictable. If you're looking at money particularly, it's pretty straightforward. Where is the money?

Where are the control points for the money? How do you control the access to that information? How do you prevent access to systems that contain the pieces they really need? With ego, it's about following the intelligence. Who are the people who are fighting with other people right now?

Who are the ones who are trying to prove that they're the best on the planet? And what kind of techniques do they use? What do they prefer to do? Because people are creatures of habit, they tend to find something that works for them, and they use it again and again and again. That gives us as defenders something to look for and an opportunity to build tools and techniques that take advantage of that propensity of the human being to do the same thing that's been successful for them in the past.

AI is becoming so important throughout many areas of technology, generative AI specifically. And I know that you've seen this play out in a variety of ways in your field and in your work. Just looking at this from the outside, and I try to stay away from war metaphors in general, but it's hard to avoid it in this case. In terms of the technological arms race with bad actors, malicious hackers, it feels like things have escalated in the security field, from hand-to-hand combat and short-range shelling with PCs and devices in the cloud, to the potential for the digital equivalent of nuclear war with AI. Am I overstating it?

What's the reality from the inside? I sure hope we're not at the nuclear war threshold. What I will say is that generative AI very definitely does enable attackers to be more effective in some areas. For example, crafting more effective phishing emails or crafting solicitations for people to click on links, things like that, definitely enables the attacker a lot more. But the thing that all of us have to think about, though, is that it can also enable the defender, because when we take advantage of generative AI, it allows our security engineering staff to be more effective.

It allows us to unload a lot of the undifferentiated heavy lifting that the engineers had to do before and to let them do that thing that humans are the best at, which is looking at the murky gray area and sifting through the little tiny pieces that don't seem to make sense, and putting them together in a puzzle picture that all of a sudden goes, aha, all right, I know what's going on here. And the interesting thing about it is that in most cases, when we apply generative AI to the security work that we have to do, we end up with happier security engineers out the other end, because ultimately they don't want to do the boring, laborious stuff. They want to apply their minds. They want to think about the interesting angles to the stuff that they're working on, the stuff that generative AI can't do right now. That's interesting.

And I know you've talked about this in the past, but I think part of the temptation, especially in such a complex field as security, would be to turn over huge amounts of the work to AI. You've talked about the importance of human involvement, humans in the loop, and the fact that you still need a line between AI and automation and humans and security engineering. Where is that line today, and how do you see it moving in the future? So models are imperfect, and that's something that a lot of people don't necessarily realize about generative AI. Generative AI model output is correct a lot of the time, but not all of the time.

And the work that we do, we have to be correct almost always, because the impact to customers otherwise would be pretty negative. So letting generative AI loose on something by itself will end up with problems for customers. It'll turn off somebody's account that shouldn't have been turned off, it'll delete data that shouldn't have been deleted, etcetera. So we require, at least in Amazon, that there is always a human in that decision making loop. The generative AI can propose solutions to things.

It can create scenarios. It can generate tooling and outputs. But there's got to be a human saying, yep, I agree. This is correct. These are the right next steps.

Let's go execute them. You're alluding to this in everything you're saying here, but I'm wondering, are there any specific examples of where AI, and generative AI in particular, have made a real difference for your team, the Amazon security team? Sure. An "easy" example, and I put this in air quotes, is plain language summarization of very complex events. If you think about the security job that I've got here, a lot of it is taking little tiny pieces of technical data and forming them into a story about what's going on.

Creating that story and then taking that information and conveying it to business owners is something that every security professional has to do. And it is arguably one of the hardest parts of our job: taking something that's incredibly complex, technical and nuanced, and putting it in language that makes sense to a chief financial officer or a chief executive officer. Generative AI is actually turning out to be very useful in that space, in summarizing events in plain English. How are you seeing the accuracy play out in that regard?

And what kinds of reviews do you have to do to make sure that even perhaps an informal, not very consequential summary of an event gets conveyed accurately internally? I will say that, because of Amazon Bedrock, our team internally can actually use all of the models that Bedrock has. So you've got everything, soup to nuts, across the various different models we support there. But the thing is that there is differing accuracy with the different models in the different applications. And we've noticed things like the word "not" missing from a summary, which materially changes the result of what you're looking at.

So we do have to do some cleanup behind things. And I think there have been some very interesting scholarly papers on the accuracy of LLMs right now that point out that there are situations where, and the term of art in the industry is, the model hallucinates. It makes up things that aren't there. And when that sneaks into a substantially important security document, that can be a real problem. What about code review, and the whole idea of having AI check human work and the things that developers might be doing inside Amazon?

Now, that one is something that works really, really well. If you think about the way that generative AI coding tools can help developers scan code for hard-to-find vulnerabilities and suggest more secure options, that's a really directly applicable benefit of using generative AI. For example, Amazon CodeWhisperer is the only AI coding companion with built-in security scanning for finding and suggesting remediations for hard-to-detect vulnerabilities. The thing there that's most interesting to developers is not the finding part, because there have been tools that found problems for a long time. It's the suggested remediations that are really interesting, because a developer doesn't want to be told, hey, you just wrote junk.

They want to be told, here's the thing that can fix this error. Do you want to accept it? Again, there's that human in the loop. The developer has to say, yes, I want to accept that particular thing. Now every developer using CodeWhisperer can receive AI-powered code suggestions tailored to their application code, which allows them to remediate security and general code quality issues more quickly and accept fixes with confidence, because we've got the model training behind it. We have this huge corpus of code at Amazon that we trained all these models on, so you get a really, really broad base to help say: this is the right way to write code, versus this is a way that's going to produce a problem.
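
To illustrate the pattern Schmidt describes, flagging a hard-to-detect vulnerability and proposing a remediation the developer can accept or reject, here is a hypothetical example of a common class of finding. The code and comments are a generic sketch of such a finding, not actual CodeWhisperer output.

```python
# Hypothetical example of a scanner-style finding and its suggested fix;
# a generic sketch, not actual CodeWhisperer output.
import sqlite3


def find_user_vulnerable(conn: sqlite3.Connection, username: str) -> list:
    # FINDING: SQL built by string interpolation is injectable. Input such as
    # "x' OR '1'='1" changes the query's structure and returns every row.
    cursor = conn.execute(f"SELECT id, email FROM users WHERE name = '{username}'")
    return cursor.fetchall()


def find_user_remediated(conn: sqlite3.Connection, username: str) -> list:
    # SUGGESTED REMEDIATION: a parameterized query. The driver binds the value,
    # so user input can never alter the statement itself. The developer stays
    # in the loop by choosing whether to accept this fix.
    cursor = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cursor.fetchall()
```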

So those are some of the ways that Amazon internally is using generative AI. And I know that relates in part to your team and internal use, but a lot of your role too is communicating with external customers about how to adopt generative AI. I want to talk about that when we come back. You're listening to GeekWire and we will be right back. I wanted a career in IT, but I didn't know where to start.

Announcer
WGU makes it simple. Their accredited online degree programs cover all kinds of IT specialties, and they have valuable industry certifications built in at no extra cost. The payoff? Having those certs back up my degree makes me look even better to future employers. A nonprofit university that includes top industry certs in their programs.

I choose WGU. Learn more at wgu.edu. IT certs included.

Todd Bishop
Welcome back. It's Todd Bishop. This week our guest is Steve Schmidt, Amazon's chief security officer. So Steve, it's interesting to hear these scenarios where Amazon's using this internally. I was going back and watching your AWS re:Invent keynote from last fall, and one of the things that you were doing there clearly was communicating to Amazon's partners and businesses about how to adopt artificial intelligence themselves.

And I know, of course, Amazon has some unique selling points here in terms of the things that it offers through Amazon Bedrock and elsewhere. But just big picture, I'm wondering, what should businesses be thinking about, security-wise, when they're thinking about adopting generative AI in their organizations? Todd, you pointed to it right there with the question itself. The number one question we're getting from business customers right now is how to implement AI in their businesses securely.

Businesses are thinking about how security is evolving in our generative AI world. I mean, if you think the generative AI world is moving quickly, the security world, oh my gosh, we've got to figure out how to keep up with this crazy pace. And so businesses are being careful about adopting this emerging technology, which I think is smart. When a business has identified an area where AI will help them achieve their goals, we've found that there are three big questions they should be asking to help enable generative AI adoption.

And notice I said enable. I think it's super important that security professionals be focused on how we can help, how we can do this safely, not how we can stop things. So the first question to ask is something that seems really simple, but it's deceptive. Where is our data? Business teams are sending data to an LLM for processing, either for training to help build and customize the model, or through queries when they use that model.

How is that data being handled throughout that workflow? How is it secured? Those are critical things to understand. Companies really need to be confident that their data is secured, that it remains confidential, and that they understand whether the model provider will be able to access or use that data for other purposes.

The next big question to ask is, what happens with my query and any associated data? Training data isn't the only sensitive data set you need to be concerned about. When users start to embrace generative AI and LLMs, they quickly learn what makes an effective query. They start adding more details and more specific requirements because that leads to better results. So if your user queries an AI application, is the output from that query, and the user's reaction to the results, used to train the model further?

What about the file that that user submitted as part of the query? You need to be able to answer the question about how the LLM provider will use that data and if you're comfortable with it. The query itself, by the way, can also be sensitive and should be part of the data protection plan. There is an awful lot that you can infer from a question that a user asks. You mentioned this as part of your re:Invent keynote, and it struck me. I guess the idea would be somebody makes a query that's used in the training of the LLM in a subsequent version, and then a competitor, for example, might be able to ask questions about that other company where the user was uploading a file or implying something about a sales decline in a region, and somehow figure out something that would give them a competitive advantage.

I wondered, is that merely hypothetical at this point, or are you seeing examples of where competitors and others are able to query LLMs to get that kind of inside information about what another company might be doing based on what the user there put into the LLM? These are real issues that have been demonstrated publicly. There have been a couple of papers recently that have been released on retrieving training datasets from large language models that demonstrated that things that go into that training set can be extracted despite the protections that are put in place. It's one of the primary reasons that we chose to exclude customer data from the training of our foundation models. Customers can fine-tune their own models in the Bedrock service with their data, but we won't use the data that the customer puts into that fine-tuning process for our own foundation model.

That way we can guarantee that that customer's data is safe and protected appropriately. By the way, before I forget, the third question. The third question, I think, is actually really relevant to your original set of points about using it for security purposes. The last thing that we ask people is: is the output of these models accurate enough?

The quality of the outputs from these models is steadily improving, and security teams can use generative AI as one of a group of tools to address challenges. And from the security perspective, it's really the use case that defines the relative risk. If you're using LLMs to generate custom code, is that code well written, and does that code follow your best practices, as an example? Big picture, what kinds of adoption are you seeing among the customers that you're talking to? Is it beyond the testing phase to the widespread deployment phase?

At this point, where are most of the customers, if you could speak in broad terms? So most customers have one or two flagship applications that are actually using generative AI right now, and they're using that as a way to demonstrate the utility and the practicality of the systems. There's an enormous amount of experimentation that's going on, but like in most things development related, you want to prove that something's actually beneficial. So you take one particular use case. The best use case for a lot of people who interact with customers quite a bit tends to be customer service.

Can their generative AI process assist in getting better customer service delivery by assessing large volumes of data about customer interactions and saying, here is a common pattern of problem or error, and the solutions to it? So it can be used in two ways. It can either power chatbots that customers interact with themselves directly, or it can prompt customer service agents who are dealing with problems so that they can answer a problem a customer has more completely and more quickly. I'm talking this week with Steve Schmidt, Amazon's chief security officer, and he has an interesting background himself. And I want to ask about that when we come back.

You're listening to GeekWire, and we'll be right back.

Welcome back. It's Todd Bishop. I'm talking with Steve Schmidt. He is Amazon's chief security officer. So, Steve, you had two interesting experiences in the past, and actually one that is still part of your life that I wanted to ask you a little bit about and get a sense for how each of these have informed your approach to security and technology.

First off, you were for many years with the FBI as a senior executive in various aspects of cybersecurity and technology. How did that inform your worldview, and how does that impact your work and your perspective on what you do at Amazon today? Sure. The thing that I took the most out of my experience at the FBI was a focus on the people behind adverse actions. A lot of my career, for example, I was focused on Russian and Chinese counterintelligence.

And if you look at the motivators for espionage in, sort of, the classic world, they're exactly the same things that are motivators for hackers right now. It's money, ideology, coercion, or ego. So a lot of the same behavioral traits, the same analysis processes, et cetera, apply in the world that I'm dealing with now compared to what I had previously. It's funny, because you talk about those traits and those motivations. Give me the four again.

Money, ideology, coercion, or ego. And so when you break those down, money's pretty straightforward. Where's the money? I want it. That's your classic ransomware actor.

It's the person who's trying to break into a bitcoin exchange, that sort of thing. Ideology is the nation-state-motivated hackers. It's the "I'm doing this on behalf of," pick the nation out there who's got a beef with somebody else. Coercion isn't as much of a thing anymore. This really was something that focused a lot on exposing people who maybe had a different lifestyle previously, whether, you know, if you're gay or whatever.

It was something that had a lot of leverage in the forties, fifties, sixties, seventies, and thank goodness, doesn't anymore. But ego, of course, is universal. It's, I want to prove how cool I am. And that applies whether you're in the espionage world. You know, if you look at people like Aldrich Ames or Robert Hanssen.

Robert Hanssen was the FBI agent who was spying for the Russians for 30 years. Why did he do it? It wasn't money. He got a few million dollars in diamonds that he never spent. It was because he wanted to prove he was smarter than everybody else.

How often do we run into that in the hacker world? Well, and my point was going to be how often do we run into that in the broader world, with perhaps the exception of explicit coercion, to your point, I think maybe people would replace that with influence versus coercion. But it's really interesting to think about the people you're interacting with in terms of their motivations. And it can almost be in a more constructive way, too, just so you know what they're trying to do, how you can address what they're trying to do and potentially help them. So I like those four.

If I could replace coercion with influence. Yeah, that makes sense. The other one that really stood out to me on your bio was, you are to this day a volunteer firefighter. I'd be curious to find out a little bit more about how you became one and how you fit that into your schedule, because I know it's intensive just from the volunteer firefighters that I know, and also whether you've taken lessons from that experience or insights that inform your work and life. So I believe really, really strongly that people need to have something that gives them personal satisfaction.

And my job is certainly satisfying. But there are certain challenges with the job I have. I'm in a relatively senior position, which means that a lot of the work that I do now doesn't come to fruition for years. It's about strategy in the future and about what are we going to do down the road and that sort of thing. There are also challenges with simple, normal human psychological needs.

If you think about us as people, we crave feedback. We want to see that we're successful. We want to see that what we do matters. And in the computer world, a lot of what we're dealing with is virtual. So it's really hard to see the result of your action.

It's also really hard to see an individual impact in an area where you're looking at, like, hundreds of millions of machines. Flip that on its head. Being a volunteer firefighter and advanced emergency medical technician means that if I do my job well, an individual human being who I can see and touch has a better day. And I get that real human feedback that isn't available from a computer. And that's incredibly satisfying as a person.

I know I am personally bringing value to this. I am helping that person in a situation which may have been the worst day of their lives, and we're going to make it better. So it's about being a person. It's about being a human being. That's great.

And it really gets back in part to what you were saying earlier. I realize this is more about the state of the technology, but your point about the fact that there still needs to be humanity in the process of developing secure systems and you can't just turn it over entirely to the machines. Absolutely. So one of the key trends in technology writ large, but in security in particular, is the shortage of engineers, and the fact that it's hard, and in fact so far has proven impossible, to fill all of the open jobs that are out there in the security world.

Do you see AI playing a role here, as we talk about this interaction between humans and AI? Is there a potential for automation and AI to help fill that gap? Absolutely. So, like I talked about earlier in our discussion today, a lot of what we're seeing AI be successful at is reducing the more mundane workloads that our security engineers have to deal with every day.

We are in a situation where we cannot hire enough people with the right skills. They simply don't exist. Amazon's a wonderful place to work. It is a place where we get lots and lots of qualified candidates for positions, but we still can't fill them fast enough. And it's an area, I think, where the influence of AI will be best felt by our ability to train people with additional skills more rapidly, to relieve them of undifferentiated heavy lifting, and to retain them longer term, because it means they will have a more satisfying career doing less really tough junk work, basically.

And there's being at a place which is cutting-edge. I mean, our tooling is really, really cool compared to what you can get out there on the streets. And more importantly, there is no place like Amazon in terms of scale. Simply none. So if you're an engineer who really wants to work on the cutting edge on incredibly large-scale distributed systems, you can't beat it.

Just getting to this big-picture question as well. I can imagine a lot of people listening to this might be wondering what all of this means for them as individual users of a lot of these systems, whether on their computer or their phone, or perhaps even developers working on cloud platforms. When you look at the next three, five years, or whatever timeframe you want to say, where does this shake out in terms of that arms race I was talking about earlier? Who has the advantage in the era of AI when it comes to security? The security engineers and the people on the defense, or the attackers, the ones who would be more malicious about what they're doing?

I think anybody who's in our business who thinks that they can get into a position of superiority and stay there as a static place is going to be really, really unhappy. At the end of the day, they're not going to succeed. So it's pretty clear. Actually, you used the right term there: arms race. We will always be in a position where we have to improve. We will always be in a position where the adversaries are improving their game.

And personally, I love being in that kind of situation. One of the interview questions that I actually use with candidates who come in and talk to us is: what do you like doing during a day? What I'm really aiming at with that particular question is, is this a person who loves doing the same thing every day? They like knowing exactly what their day is going to entail, and it's very predictable. Or, like me, do you love situations where you've no idea what's going to happen?

And if you look at my extracurricular activities, there is nothing that is by definition more unpredictable than 911. It's, we have no idea what's going to happen. I love that personally. And the most successful engineers in security organizations, whether it's at Amazon or other places, are people who feel the same way. They love the differences and they embrace that and see that as an opportunity.

And the cool thing is that means constantly developing new tools, constantly developing new techniques, constantly learning about our adversaries and how they work and what we need to do to beat them. Steve Schmidt, thank you very much for speaking with me. My pleasure, Todd. Thanks. Steve Schmidt is Amazon's chief security officer.

Todd Bishop
See the show notes for related links, including video from the AWS re:Inforce security conference this week. Meanwhile, in Washington, DC, Microsoft president Brad Smith faced a grilling over the Redmond company's high-profile cybersecurity challenges. They include an intrusion last year in which a Chinese hacking group compromised the Microsoft Exchange Online mailboxes of more than 500 people and 22 organizations worldwide, including senior U.S. government officials. Testifying this week, Smith said that the company accepts responsibility for all of the technical and cultural problems cited in a U.S. Cyber Safety Review Board report on the attack.

He told the legislators that Microsoft is committed to the security reforms that it's announced over the past year. Here's one extended exchange between Smith and Representative Clay Higgins of Louisiana. It's a good example of how the hearing went, with Smith repeatedly seeking to defuse the often folksy but ultimately very pointed lines of questioning from lawmakers by acknowledging that they were right. Microsoft is a great company. Everybody in here has some kind of interaction with Microsoft.

Representative Clay Higgins
We really don't have much choice. It's critical that this committee gets this right. Quite frankly, the American people, myself included, we have some issues with what has happened and how it happened and what has transpired since. And yet there's no plan B. Really, we have to address this with you. That's what that means.

Sometimes life comes down. My dad used to say, there's always one guy. It's always one guy. And today, congratulations. I'm the guy.

You're the one guy. I get it. So I have a couple of difficult questions, and I apologize for any discomfort, because I am a gentleman, but again, you're the guy. Why did Microsoft not update its blog post after the hack? They call it, it's very fancy here, an "intrusion." But after the hack, the 2023 Microsoft Exchange Online intrusion, why did it take six months for Microsoft to update the means by which most Americans would sort of be made aware of such a hack?

Brad Smith
Well, first of all, I appreciate the question. It's one that I asked our team when I read the CSRB report. It's the part of the report that surprised me the most. You know, we had five versions of that blog, the original, and then four updates. And we do a lot of updates of these reports.

And when I asked the team, they said the specific thing that had changed, namely a theory, a hypothesis about the cause of the intrusion, changed over time, but it didn't change in a way that would give anyone useful or actionable information that they could apply. Okay, so you see, Mr. Smith, respectfully, that answer does not encourage trust. And regular Americans listening are going to have to move the tape back on the Microsoft instrument and listen to what you said again. But you didn't do it. I mean, you're Microsoft, yet a major thing happened, and the means by which you communicate with your customers was not updated for six months. So.

Representative Clay Higgins
So I'm just gonna say, I don't really accept that answer. Could I just, to be thoroughly honest... But I need to move on, look at another question. I said the same thing, and we had the same conversation inside the company.

Okay. As for that reference that Representative Higgins made to plan B, or alternatives to Microsoft software, Smith also talked about competition and collaboration across the industry. It's an entire ecosystem that we're seeking to defend, and nobody can do it by themselves. And I think fundamentally, just as the CSRB's words were well taken by us, we needed to focus on our culture. I think we have a collective culture, and it's a collective culture that we need to work on by inspiring more collaboration, not just with the government, but frankly, across our industry, so that, you know, people can compete.

Brad Smith
Somebody said there's no plan B. I think about two-thirds of the folks who are sitting behind me in this room are trying to sell plan B to you in one way or another, and that's okay. But there's a higher calling here as well. And I like to say, you know, the truth is, when shots are being fired, people end up being hit, and they take their turn being the patient in the back of the ambulance. Everybody else, you're either going to be an ambulance driver or you're going to be an ambulance chaser.

Let's be ambulance drivers together. See the show notes for links to the video of this hearing and coverage on GeekWire. Thanks for listening. Kurt Milton edited this episode. I'm GeekWire co-founder Todd Bishop.

Todd Bishop
We'll be back next week with a new episode of the GeekWire Podcast.