Primary Topic
This episode explores the ethical implications of using AI in political strategies, particularly focusing on the balance between technological advancement and maintaining democratic integrity.
Episode Summary
Main Takeaways
- AI's impact on democracy is profound, influencing voter behavior and public opinion through data manipulation.
- Ethical considerations are crucial in the deployment of AI in political arenas to prevent moral injuries and erosion of public trust.
- AI has the potential to either exacerbate or mitigate political misinformation and manipulation, depending on its regulatory framework.
- The discussion highlights the need for transparency and accountability in the use of AI technologies in political strategies.
- There's an ongoing debate about the role of AI in supporting or undermining democratic values and processes.
Episode Chapters
1: Introduction to AI in Politics
Exploring the historical context of AI's use in political campaigns, focusing on ethical concerns and societal impact. Keisha McKenzie: "Remember Cambridge Analytica? That scandal illustrated how data and AI can play controversial roles in elections."
2: The Ethical Dilemma
Discussion on the ethical implications of using AI for political gain versus its potential benefits in democratic engagement. Bruce Schneier: "AI in politics is not just about winning; it's about how we win and what ethical lines we're willing to cross."
3: AI's Dual Potential
Analyzing AI's capability to either disrupt or support democratic processes through examples of its application in political strategies. Ananda Barclay: "AI could either be a tool for greater engagement or a weapon of misinformation, depending on its design and control."
4: The Role of Regulation
Debate on the necessary regulations and controls to ensure AI's positive impact on democracy. Bruce Schneier: "Without strict regulations, AI could be misused to manipulate elections and public opinion on a massive scale."
5: Concluding Thoughts
Summarizing the discussion and reflecting on the balance between leveraging AI for good while avoiding its pitfalls in political contexts. Keisha McKenzie: "We must be vigilant about how AI is implemented in political contexts to ensure it supports rather than undermines democracy."
Actionable Advice
- Stay informed about how AI is used in political campaigns to understand its influence on public opinion.
- Advocate for transparent AI use in politics to ensure it adheres to ethical standards.
- Support regulations that prevent misuse of AI in political manipulation.
- Engage in discussions about the ethical use of AI in politics to raise awareness.
- Encourage political accountability to ensure that AI technologies are used to enhance democratic processes.
About This Episode
People
Bruce Schneier, Keisha McKenzie, Ananda Barclay
Guest Name(s):
Bruce Schneier
Content Warnings:
None
Transcript
Keisha McKenzie
Hi, Keisha. Hey, Ananda. Do you remember the scandal that was Cambridge Analytica, two U.S. presidential election cycles ago? I remember that name, and I remember lots of drama on social media, but tell me about it. So, long story short, Cambridge Analytica was a political consulting firm known for its use of data analytics and targeted advertising in political campaigns.
Ananda Barclay
The company gained notoriety for its role in the 2016 U.S. presidential election and the Brexit referendum, particularly for its controversial methods of harvesting and utilizing personal data from Facebook users without their consent. This data was then analyzed and used to create detailed psychological profiles and deliver hyper-targeted political advertisements. These tactics were employed to influence voter behavior and public opinion. Listen back to this epic interview with whistleblower Christopher Wylie, a data analyst formerly employed by Cambridge Analytica, interviewed by Carole Cadwalladr of the Guardian. Throughout history, you have examples of grossly unethical experiments.
Christopher Wylie
And is that what this was? I think that, you know, yes, it was a grossly unethical experiment because you are playing with an entire country, the psychology of an entire country, without their consent or awareness. And not only are you, like, playing with the psychology of an entire nation, you're playing with the psychology of an entire nation in the context of the democratic process. What does that bring up for you? Okay, so I think I was still using Facebook back then, but, like, there was a whole scandal about people feeling hyper exposed.
Keisha McKenzie
They were doing these free quizzes, Farmville or whatever, and surveys, and they're like, who's taking the survey data? Who are they then selling it to? And why can't I just do a free quiz on my social media and be left alone? So, I know a lot of people who got up off of Facebook that year. It was already changing, like, the atmosphere of the site.
It was less personal and more agitated and aggressive. It did start to sour people on the whole social media thing. The casual connecting Internet that we were promised, I think that was the decade we lost it. That makes sense. 2016 was definitely the year that I got off of Facebook and social media.
Ananda Barclay
And this year is the year that I'm back because of this podcast and other, like, now. It's like, can't run a business or do anything without it. It's more of a utility for me now. Yeah. Yeah.
Keisha McKenzie
And I feel like 2016, at least in the UK, was the year of Brexit. Britain leaving the EU by referendum and everything up to that pulled on the same sorts of strings as the Cambridge Analytica scandal. So lots of behind the scenes data manipulation, changing what people saw, reading people's, or trying to read people's emotions. So, yeah. Big season.
Ananda Barclay
Big season, indeed. It's never all private. No. Yeah, no. Cambridge Analytica speaks to the high stakes possibility of collective moral injury, regardless of who's in office and from what political party.
The use of technology when it comes to political strategy is not a political question, but an ethical one. When public ethic is violated, it's likely that moral injury follows. Moral injury occurs because political violations betray our sense of trust in government, directly impacting our lives with outcomes based on decisions that violate our moral ethical consent. Yeah. With the scale at which government can act on people, it doesn't really matter whether you're directly affected.
Keisha McKenzie
Like, if you don't use a particular service or if you didn't submit particular data, it's just the overall climate around you becomes corroded in some way. So you have a stake. And so even if you're not the person who's going to get a check because there's a class action suit, but the whole soup that you're in, like, we all get to be part of that. Yeah. So if we take the class action, Keisha.
Ananda Barclay
Like, all right, so I'm not going to get a check because some company did me dirty with mesothelioma, right? I don't have mesothelioma, but why should I care that a class action lawsuit is happening for folks with mesothelioma? Like, how does that impact me? The hope is that the action for that particular company that did something wrong sends a warning shot to all the companies who think that they could get away with a similar sort of effect. And so you hope that the specific remedy to the people directly impacted changes the climate around them?
Keisha McKenzie
Kind of like what you were saying last episode, that it's the people who are directly impacted who shape what a moral repair looks like. The purpose of that moral repair is to change it so that the injury doesn't happen ever again. And so when it comes to political issues, people who can vote, they have a direct influence on the people who are, at least in theory. Let me reframe that. I mean, no, no.
Ananda Barclay
We have to say we have an impact. Otherwise, there's nothing left. Keisha, there's nothing left in this dumpster fire that is the United States. Okay, rewind.
Keisha McKenzie
I need to believe that my vote can light a fire under multiple people's behinds who serve the public. Yeah, because that's one of the levers we have, and we have to pull all the levers. Yes. So the people who can vote have a particular influence in the system. But even people who can't vote, the non-immigrants, the student workers, the people who are on short-term labor contracts, are still affected by the outcomes of the system that the voters represent and influence. So I think about these sorts of processes in a wider circle than just who's directly in the target. The person who might have their information released is one person. Yes, but all of us are implicated, because what can potentially happen to them can happen to us, and often does.
Bruce Schneier
Got it. So you're talking about accountability and deterrence. Yeah, accountability, deterrence, and then being in solidarity, because we're all connected. So, like, thinking about how we pay attention to the well-being of everybody is basically what politics is all about. Hear, hear.
Ananda Barclay
I mean, that is the idea of a democracy. Rumor has it, I've heard legends and myths.
The goal of any political strategy is to win. And so the questions understandably surface. In such a massive global election year, what role does AI play in the political strategy of winning? Where should we focus our concerns? How do we manage our anxieties?
The big question of this episode is, what is an ethical win for the public when it comes to AI and political strategy?
I'm Ananda Barclay. I'm Keisha McKenzie, and this is Moral Repair: A Black Exploration of Tech, a show where we explore the social and moral impacts of tech and share wisdom from Africana culture for caring and repairing what tech has broken. When we come back, we'll talk with Bruce Schneier, a fellow and lecturer in public policy at the Harvard Kennedy School and the Berkman Klein Center for Internet & Society.
Hi, I'm Dacher Keltner, a psychologist at UC Berkeley and host of the Science of Happiness podcast. On each episode, a guest tries a practice that's been documented by research to improve our well-being. It might be finding a new way to connect more deeply with others, learning how to ride the waves of anxiety, and tons of other interesting and sometimes unexpected strategies. Then we learn the science behind why these strategies work. Join me on the Science of Happiness podcast.
Keisha McKenzie
Bruce Schneier calls himself a public interest technologist, and he works at the intersection of security, technology, and people. In your last essay on your website, you talk about how the frontier became the slogan of uncontrolled AI. Say more about that for our listeners. What is the frontier? What are you referring to with that?
Bruce Schneier
Frontier is what we talk about when we talk about the newest AI models. So the term you will hear from the researchers, from the VCs, from the companies is frontier models. These are models on the edge. These are models that are doing the best. These are the ones that cost hundreds of millions of dollars to create, and a lot of energy.
And it's a complicated metaphor. Frontier is the final frontier: space, Star Trek. It didn't come from the American West and subjugating the native population, which is the legacy of the frontier in this country. And when you peel back the edges of AI, you see a lot of frontier thinking; you see a lot of colonization. A lot of the data is there for the taking.
No one has it. We're going to take it. There's a lot of rule breaking. The frontier was all about "the rules don't apply, and we're going to make our own rules." Think of the American West and the cowboys.
That metaphor. So it is complicated, and I worry that we are making some mistakes, in our rush to create AI, when we think about AI. What would we need to do, either with AI or something else, to avert the worst impacts of degraded quality in public debate? Right now, AI is not helping. There is so much AI nonsense being created by sites that realize they can fire the humans and have the AI write the stuff. And this isn't new. We have seen AI-generated content for years in three areas: in sports, in finance, and in fashion. Those are three areas where it was kind of formulaic, it was stylistic.
It didn't take a lot of creativity to write the article. And AI has been writing those articles for a while now. But now they're writing more stuff and it's not very good. And the AI dreck is pushing out the quality human stuff. Now, this is hard.
I mean, some of it is our fault. We as media consumers don't demand quality in our reporting, in what we consume, so we accept poor quality things. I think AI has the potential to write very nuanced articles about issues that matter. I think about local politics again, right? We've had the death of local journalism because of platforms like Facebook.
But now lots of public government meetings at the local level are effectively secret because nobody's there. There is no local reporter covering the school board or the city council in a town. Now, AI can fix that. AI is actually really good at summarizing stuff. So if the meeting is open and it's recorded, the AI can summarize it and write the article.
So I'd rather have a human doing this, but the problem is I don't have a human. We can't afford the human. We're not willing to pay the human, let's say it that way; of course we can afford it. And in that area, an AI is way better than nothing. It'll provide some accountability for local government.
Ananda Barclay
What would seem most urgent, if there is a sense of urgency? Urgent is this election. On a scale of one to ten, I'm pretty concerned about democracy in the United States. I see some very strong authoritarian tendencies. I see a lot of anti-democratic ways of thinking.
Bruce Schneier
I see a lot of people more concerned with results than the process. Now, the annoying thing about democracy is you got to accept it, even if the vote doesn't go your way. Democracy is really about not picking the winner, but convincing the loser to be okay with having lost. And that sometimes means your side loses. And if you're not okay with that, then you're not really for democracy.
I see a lot of people in this country not okay with that, that think the ends justify the means. And I don't just mean their side; I mean us too, a little bit. But I do see some very strong anti-democratic ways of thinking, and that gives me pause. Turns out a lot of democracy is based on the way we do things rather than laws.
Hey, it's sort of just the way we do it. And it turns out if someone just breaks all the norms, there's not a lot that can stop them. We learned that, and that was a surprise to many of us. So I don't know. I like to think we are resilient.
We were very resilient in 2020. We've been resilient since then, despite the rhetoric. Still, there are a lot of places where democracy is not working. We recently spoke to Professor Alondra Nelson, formerly deputy director at the Office of Science and Technology Policy, and she said that over a third of the planet is expected to have elections this year.
Keisha McKenzie
And you said you thought democracy globally is kind of under siege. Can you share what impact you think AI might have on some of these elections and how we think about government by the people globally? It's an interesting statistic, and I've heard 30%, I've heard a third, I've heard 40%, I've heard over half. Kind of depends how you count, but it is the United States and the EU, and it's India.
Bruce Schneier
Australia will be early next year, UK early next year. These are large countries with strong democratic traditions, and we are worried about AI and its ability to create fakes. This is a very near term AI risk. This is not AI subjugating humanity. This is not AI taking our jobs.
This is AI making better memes. Now, the fear is that AI will be used to create fake audio and fake video. There was a fake audio in Slovakia's election a couple of months ago that came out a week before the actual vote. That might have had an effect. We don't know.
So we worry that what the Russians did in 2016 with the Internet Research Agency, creating fake ads on Facebook and fake memes on Twitter, can be replicated at speed, at scale, at a much lower cost.
Ananda Barclay
What Bruce is referring to is the Russian government's interference in the 2016 U.S. election through a coordinated campaign that included hacking Democratic Party emails and disseminating misinformation across social media platforms. This interference aimed to sow discord, undermine public trust in the democratic process, and skew public opinion in favor of certain political outcomes. I read an article that you wrote for the Ash Center back in November about the four dimensions of change. You said speed, scale, scope, and sophistication in terms of how AI might develop. Can you give us some concrete examples of what you mean by scale versus scope so people might understand?
Keisha McKenzie
Yeah. So let's take misinformation, like why misinformation might be worse. Speed is one reason: AIs can make memes and fake facts and write tweets and Facebook posts so much faster than humans can.
Bruce Schneier
They operate at a speed that humans can't rival. They can do it at scale. They can make not just one post, but thousands, millions. You can imagine millions of fake accounts, each tweeting once, instead of one account tweeting a million times. That's scale. Scope is how broad it is: on Facebook, on Twitter, in different languages, optimized for different audiences.
And then the sophistication, they might be able to figure out more sophisticated propaganda strategies than humans can. And to me, that's what I look at when I look at these technologies, and specifically when those changes in degree make changes in kind, when it is different, that it's not just faster, but it's something else. So it's not just the russian government with 100 plus people in a building in St. Petersburg, it's everybody. And the worry is that noise will drown out the signal.
Now, it's not just AI. Blaming that on AI, I think, is too easy by half. A lot of that is us, who value the horse race and who's ahead in the polls. We're having this interview the day after President Biden's State of the Union address. And everything I'm reading is how good it was, how bad it was.
Did he make gaffes? Did he sound good? Not a lot about substance, and that's on us. That is the kind of political reporting we like. And that kind of reporting plays into memes, which plays into AI's strength.
So the worry is that AI will make fakes, and the fakes will be disseminated. And more importantly, there's something called the liar's dividend: if there are so many fakes out there, then when something real happens, you can claim it's a fake. And we have seen this in the United States. Trump has said some audio was probably a deepfake when it actually wasn't.
I don't know if that will have an effect. We don't need AI to create memes to denigrate the other side. There's a video of Nancy Pelosi that was slowed down to make her look drunk. That wasn't a deepfake; that was slowing down a video. Like, any twelve-year-old can do that with tools on their phone, but it was something that was passed around.
So I don't think the deepfakes are going to be that big a thing. I do worry about them close to the election, when they can't be debunked, and I worry about that ability to claim something real is fake, because there are so many fakes out there. What ways do you see the average voter being able to, not defend themselves, but keep their guard up, with a sense of wisdom around how to navigate how technology is being used in the political process? It's hard.
I mean, I'm not sure I know an average voter. You might not either. The people we know are hyper-aware, maybe hyper-political. We are not average. When I read about the average voter, it's not the kind of people I meet at Harvard, which is very elite.
The average voter, near as I can tell, doesn't get a lot of information. And not only does telling them how to be on their guard not help; they don't know they have to be on their guard. They probably don't even know what being on their guard means. It's sort of interesting. I think about the idea that we're going to put labels on memes, whether they're true or not.
To me, that doesn't make a lot of sense, because people who would believe the labels don't need the labels, and the people who need the labels aren't going to believe them if they're there. And I don't want to live in a world where the average voter needs to be on their guard. That seems like a really not fun place. I want a world where the average voter is safe, where the average voter gets information they can use to make an intelligent decision about which candidate represents their views most accurately, and cast their vote without any undue pressure or intimidation or long lines or anything.
That's the world I want: that you don't have to be a political junkie in order to have a political voice, that you could be someone who is just living their life in a democracy, and then you get your ability to have your say. So I don't know. I don't know what to say to the average voter, I guess.
I'm sorry. I want it to be better. Bruce, you've written about how AI can shape campaign advertising, communicate with voters, write legislation, distribute propaganda, submit comments. Can you talk about how AI is showing up in the practice of democracy: in campaigns, lawmaking, and regulation? Depends how we do it. Right now, we've had examples where there have been rulemaking processes and hundreds of thousands, millions, of comments were submitted, not by an AI, but by a machine, by a computer.
It wasn't even clever. It was just multiple comments submitted by fake people. So already it's pretty bad. AI as assistive tech, if we do it right, increases democracy. So if an AI can make you more articulate to your congressperson, I think that's great.
If the AI denies you a voice, that's bad. I think about AI as being used to ease administrative burdens, like helping people fill out government forms. Now we can do that, and that is possible. And that will engage people in democracy. That will help them get the assistance they are legally entitled to.
You could also imagine AI being used to increase the division between the haves and the have-nots. This is the problem with tech, right? The old saying: tech is neither good nor bad, nor is it neutral. We can design tech to favor democracy. We can design tech to favor the powerful.
And these AI models right now, they are very expensive to create and relatively free to use, if you're not going to use the best model; the tech monopolies are right now giving them away. But my guess is that's temporary, and they will become cheaper, and they will become more available, and people will have personal AI assistants. Now that can be an incredible boon for equality, to give you an advocate where you couldn't have had one before. And here I'm thinking about people for whom going to a courthouse or a government building is a burden.
They have a job. They have a family. It's not easy to get around. It's always easy to say "go do this" if you're a middle-class white guy, and it's harder the more you deviate from that zero level of ease.
AI can make this better. It might not. It really depends. But we do have an opportunity here. So I like talking about the benefits of AI for democracy because that gives us a chance for having it.
Democracy is an aspiration. It's always an aspiration. And our goal is to make it better. After the break, we'll hear more about what Bruce has to say about making AI a tool for people to have hope, ease, and access to the democratic process. This is Moral Repair: A Black Exploration of Tech.
I'm Darrin. And I'm Esther. And this is Second Sunday, a podcast about black queer folk finding, keeping, and sometimes losing faith. This season's full of candid conversations. We're talking to theologians, artists, activists, and community members living at the intersections of faith, spirituality, and identity.
The saints ain't ready for this, but we're still gonna talk about it. Second Sunday. Find it wherever you get podcasts. Second Sunday is a Qube original podcast and is part of the PRX Big Questions Project.
Ananda Barclay
And we're back talking with the cybersecurity expert for the people, Bruce Schneier, to hear what he has to say about AI and the democratic process.
Bruce Schneier
What excites and/or terrifies you about technology? What big questions do you still have about the future of tech in politics? I think a lot about power. When I think about tech, the element that to me is most important and exciting and disturbing is how it relates to power: whether power can co-opt it, whether it can be used to dismantle power and disaggregate power and distribute power, or whether it will consolidate power.
And we've been through these cycles. When Facebook first appeared, we all thought it would be democratizing, that it would give voices to people who didn't have voices, and it did for a while. And now it helps the powerful consolidate their power. And what comes next will first be democratizing, and then someone will figure out how to consolidate it. I think AI is going to be that way.
I think about all the ways AI will democratize, will distribute power, will give power to the powerless. But the powerful will try to figure out how to centralize that power again. That's what power does. So that is what I think about most. I really think about how these technologies interact with power, how power can use them, how they can be used against power.
And by that I don't mean revolutionary things. Like WhatsApp breaking the monopoly on phone message charges, right? That kind of thing, sort of very low level. The way these technologies can make it so people can organize and find themselves and find their tribes, which has been one of the wonders of the Internet. You can find people like you no matter how weird you are.
And that's wonderful. Yes. On the other hand, conspiracy theorists find each other, and now suddenly it's their reality. It's always the good with the bad. How can AI be used to support the democratic process? Let's see, things I've seen.
Let's see things I've seen. There is a public charity that has created an AI to help people run for local office. Now these are nonpartisan offices. These are sheriff or person in charge of sanitation or dog catcher, very local government jobs that are not party affiliated. And this AI helps regular people fill out the paperwork, have a website, get the signatures, do all the things necessary to run.
That seems fantastic. That will increase engagement in local government around the country. That feels like a great thing. I've seen AI's that are helping people navigate the legal process. There's one site called donotpay.com which will help you get out of parking tickets that you don't deserve to pay.
Again, very egalitarian. You can imagine something that will help people in housing court or immigration court, all of these legal processes that have a lot of paperwork, a lot of bureaucracy, where you have to pay a multi-hundred-dollar-an-hour lawyer to help you navigate. And if you can't afford that, you don't get a lot of representation. So that seems really important and really powerful. There are groups that are using AI to help engage voters, to get beyond the parties with their big party databases of voters that they control access to as a way to limit primary challenges. That feels more democratic.
These are all, like, very powerful things. I think about AI assisting in polling, in divining campaign strategies. Here again, we can imagine haves and have-nots, but we can imagine people who are not in the normal party machine being able to use them to run for office. These all seem like really good things. And some of this is science fiction, right? This isn't all reality, but none of it is stupid science fiction.
It's science fiction that in a couple of years is likely to be true. My question is, how do we build an AI that isn't just going to be beholden to corporate interests that can do some of these good things for society, for democracy, and not just do what's good for the Silicon Valley tech billionaires?
Ananda Barclay
So the question is: what wisdom from Africana culture do we have as it relates to democracy and tech? I wonder about the theme of participation. I was at a concert recently, black gospel music mashed up with a local symphony, and I knew going in, the minute they said this gospel artist is coming, that it was going to be church folks there, a completely different demographic than what's usually going to participate in this cultural space. Yeah. The usual rules about waiting until the end to applaud or not singing along, that wasn't going to apply, because the black cultural space is participatory.
Keisha McKenzie
So the audience is as much a part of the performance as the person on the stage, and the person on the stage welcomes that. It's not an imposition; it's a conversation between them, and it creates something that neither of them could create on their own. And it was amazing and beautiful and enlivening. And I think that gave me a taste of what governance of the people, by the people, and for the people can be: a richly participatory experience, not something that is gatekept by the people who had early access to the stage, and not something that is exclusionary, or just respecting this performance that's happening, like at a campaign event, where your only role is to co-sign the thing that is happening in the stadium or to cast one vote, one time, every two to four years. But, like, what's the quality of democracy in between those stage moments?
And how does even the stage moment welcome all of the richness of the so-called audience to create something new? The wisdom of call and response. Yes, exactly. Call and response and a deeply robust inclusion. Mm hmm.
Ananda Barclay
That creates a co-facilitation of performance, of why we're gathered in the first place. Yes, that would be a democracy.
Keisha McKenzie
It would be. And I think that's part of what would mitigate what could be seen as a threat from AI, of crowding out people: making sure that the system itself not only brings people in at select points but really centers the contribution of people. Then it doesn't really matter what the machines do, or the instrumental ends to which we put the machines, because you're crafting the system and the structure and the policy and process around what humans can do. What comes up for me, as it relates to what Africana wisdom has, is the importance of rhetoric. In several of Toni Morrison's essays, she speaks about the importance of language and the use of it as a technological tool.
Ananda Barclay
As we're talking about AI and the democratic process: how has our language shifted when we talk about democracy? How do we use it to make or break worlds? We shape worlds through language. Do we even value that rhetoric, and what that rhetoric then creates or instigates or obligates the listener to? Is there an example that you have that's coming to mind?
I think a great example is book banning, right? We're banning different cultural perspectives. So, for her, it's the use of rhetoric, and in particular, not the imposition, but the understanding that the arts and the humanities really are alongside technology. And she does this in a speech in 1991. Like, everything that's happening today, if you look back, you'd be like, oh, Toni Morrison is a prophet. And it's like, no, she just saw the writing on the wall. And will it continue?
And even in her speech, she says, I've dramatized and exaggerated these things to prove a point. And what I find eerie is that we are actually at the point of her dramatization. What she names as dramatized is actually here. Yeah. And so the work, the technology of the humanities, of engaging with cultures and people of differences, is vitally important because you get a diversified understanding of the human experience that somebody is not you, and that's okay and actually vital that somebody sees the world differently, thinks differently, that it is actually an insult to knowledge, it's an insult to technological innovation, to consider one understanding supreme.
But what does it look like to actually delve into the complexity, that we are dealing always already with multiple identities, multiple ways of seeing, multiple ways of knowing? And what is the art of then negotiating those multiple ways so that we can be a better society, a better democracy together? And that is the gift that the arts give. So going to a show, right, as you're talking about, movies, literature, art, these things need to happen because they expose humanity as we are and force us to engage in ways that cultivate our own humanity and compassion for others.
So, yeah, the technology of rhetoric and the humanities, from the wisdom of Toni Morrison, is what comes up for me. No one should convince you that listening to authors and concerts and going to exhibits is secondary. Whatever your profession is, this may be the real work you are doing: creating the juncture where artists, scholars, and the public intervene to create and facilitate genuine intellectual development, to facilitate communities successfully, to alter socially unhealthy situations in traditional and non-traditional ways, to nurture splendid consequences of fed-up representatives and citizenry who no longer are waiting for the peace dividend or the campaign promises. I wanted to call your attention to how much is needed now from you, and how vital it is now, with no more time to lose, that we become as innovative as possible, if we don't want to continue to relegate dying and loving and giving and creating to pop bestsellers and greeting cards.
Christopher Wylie
I believe, as you do, that there are distinctions to be made and kept among data, knowledge, and wisdom; that there should be some progression among them; that data helps feed and nourish knowledge; that knowledge is the staging ground for wisdom; and that wisdom is not merely what works or what succeeds, nor is it a final authority. Whatever it is, it will always be a search.
Ananda Barclay
We're building community with listeners this season, so reach out and talk to us on Instagram. Our handle is Moral Repair Podcast. Also catch us on X, formerly Twitter. We'd love to hear from you. Follow the show on all major audio platforms: Apple, Spotify, Audible, RSS, wherever you like to listen to podcasts.
Keisha McKenzie
And please help others find Moral Repair by sharing new episodes and leaving us a review. I'm Ananda Barclay. And I'm Keisha McKenzie. The Moral Repair: A Black Exploration of Tech team is us, Ari Daniel, Emmanuel de Sarme, Ry Dorsey, Courtney Florentin, and Genevieve Sponsler. The executive producer of PRX Productions is Jocelyn Gonzalez. Original music is by Jim Cooper and Infomercial USA, and original cover art is by Randy Price. Our two-time Ambie-nominated podcast is part of the Big Questions Project at PRX Productions, which is supported by the John Templeton Foundation.
From PRX.