Primary Topic
This episode discusses the integration of ethical, legal, and societal implications (ELSI) into DARPA's technology development projects, emphasizing proactive consideration during the design phase.
Episode Summary
Main Takeaways
- DARPA is institutionalizing ELSI considerations in all its programs to ensure responsible development of technologies.
- The role of an ELSI Scholar has been created to guide and integrate these considerations actively.
- Engaging with diverse expertise is crucial for anticipating the broader impacts of technological innovations.
- ELSI is not just a checklist but a fundamental perspective that enhances program design and application.
- DARPA's approach aims to create technologies that are ethically sound and socially beneficial.
Episode Chapters
1: Introduction to ELSI at DARPA
Overview of DARPA's commitment to integrating ELSI in its technology projects, featuring comments from agency leaders about the importance of these considerations. Tom Shortridge: "DARPA starts an average of 50 new programs each year, all of which now include ELSI considerations."
2: The Role of the ELSI Scholar
Discussion on the new ELSI Scholar role at DARPA and its impact on program development, featuring insights from the first scholar, Dr. Rebecca Crootof. Rebecca Crootof: "I view ELSI as something that can enable programs, not restrict them."
3: Real-World Applications and Implications
Exploration of specific DARPA projects where ELSI considerations have already influenced design and implementation strategies. Rebecca Crootof: "ELSI considerations have led to program adjustments that are more socially responsible and inclusive."
Actionable Advice
- Integrate ethical considerations at the start of any project to guide its development.
- Engage interdisciplinary teams to evaluate potential impacts from multiple perspectives.
- Create roles or positions dedicated to overseeing ethical and societal impacts in project teams.
- Regularly review and update ethical guidelines as projects evolve and new insights are gained.
- Foster an organizational culture that values proactive consideration of potential impacts.
About This Episode
As a global leader in innovation, DARPA starts an average of 50 new programs each year. These programs span a variety of technical disciplines to develop breakthrough technologies for national security, all of which have the potential to raise ethical, legal, and societal implication – or, ELSI – considerations. Taking time to consider ELSI's role in a program can contribute to the responsible development of emerging technologies by guiding innovation, maximizing the potential application space, and facilitating dialogue with future end-users and the public to ensure diverse perspectives and implications are considered. It can improve research by fostering conversations that identify unknowns, anticipate consequences, and make design decisions to maximize benefits and opportunities and minimize risks and harms. In this episode of Voices from DARPA (https://www.darpa.mil/about-us/podcast), we'll hear from DARPA Director Dr. Stefanie Tompkins, who explains the agency's perspective on those implications, as well as Dr. Bart Russell, deputy director of the Defense Sciences Office, on what it would mean to incorporate ELSI across the agency more formally. Finally, Dr. Rebecca Crootof, DARPA's inaugural ELSI Visiting Scholar, will discuss her journey to the agency and her approach to developing a process to ensure that ELSI can inform — and even improve — DARPA programs.
People
Dr. Stefanie Tompkins, Dr. Rebecca Crootof
Companies
DARPA
Books
None
Guest Name(s):
Dr. Rebecca Crootof
Content Warnings:
None
Transcript
Speaker A
Coming to DARPA is like grabbing the nose cone of a rocket and holding on for dear life. DARPA is a place where if you don't invent the Internet, you only get a B. A DARPA program manager quite literally invents tomorrow. Coming to work every day and being humbled by that. DARPA is not one person or one place.
It's a collection of people that are excited about moving technology forward. Hello and welcome to Voices from DARPA. I'm your host, Tom Shortridge. As a global leader in innovation, DARPA starts an average of 50 new programs each year. These programs span a variety of technical disciplines to develop breakthrough technologies for national security, all of which have the potential to raise ethical, legal, and societal implication, or ELSI, considerations.
Tom Shortridge
ELSI has an important role contributing to the responsible development of emerging technologies by guiding innovation, maximizing the potential application space, and facilitating dialogue with future end users and the public to ensure all perspectives and implications are considered. DARPA Director Doctor Stefanie Tompkins explains the agency's perspective on those implications. We have a philosophy, when we are dealing with new technology, of really trying to build in from the very beginning, thinking about the ethical, the legal, and the societal implications of the technology if we are successful. And we work very closely with ethicists and legal scholars and behavioral scientists to help us project and predict what kinds of things could happen. So that in the course of any technology development, we are also trying to gather the data that policymakers would need to understand in order to inform on how it could be used, what kinds of regulations would need to be put into place. It's not perfect, right?
Stefanie Tompkins
You can never imagine everything, but it is our job to try. Doctor Bart Russell, deputy director of the Defense Sciences Office, worked closely with Doctor Tompkins and others on what it would mean to incorporate ELSI across the agency more formally. She started last year convening a group of folks across the agency who had independently been working on ELSI and ELSI-like activities, and said, we need to do this more broadly for the agency. If DARPA is by definition starting programs that inherently are meant to disrupt whatever field that they are in, then they should probably also inherently consider the ethical, legal, and societal implications of that disruption. She wanted to make it more purposeful for every program, even those programs who might think, that's not my problem, this is going to be somebody else's decision somewhere down the line.
Bart Russell
And so we started thinking, what would be the way to make this an organic feature within DARPA? But also, the greatest sin at DARPA is to slow something down, right? So we needed to make this process both responsive and informed, but also very fast and lean to match the way our programs are structured. In early 2023, DARPA put out a call for its first full-time, one-year visiting ELSI scholar.
Tom Shortridge
As noted in the initial job posting, the ELSI scholar would have an opportunity to affect the structure of DARPA's work in ELSI, provide subject matter expertise to the development of DARPA programs, and conduct independent research while at the agency. That sounds like a lot of responsibility, influence, and potential impact. For some, maybe too good to be true. I had reservations about taking this role because I was concerned about whether or not this role was meant to exist as a fig leaf, right, whether or not this role was meant to be performative or something for public affairs. And I pressed the people who were interviewing me, I pressed them pretty hard about the nature of the role and what would be possible.
Rebecca Crootof
And I had persuasive enough answers about folks' genuine interest in creating a culture of ELSI here that I felt comfortable taking the role. But I definitely started skeptically.
Tom Shortridge
That's Doctor Rebecca Crootof, professor of law at the University of Richmond, who started in January 2024 as DARPA's inaugural visiting ELSI Scholar. Her journey to the agency, though certainly nonlinear, illustrates the diverse knowledge base and varied perspectives needed to tackle the broad implications of disruptive breakthrough technologies. I grew up reading science fiction, and I continue to read and enjoy science fiction and always thought that was just, you know, something I did on the side that was fun, and went about my life, went to college, became an English teacher. After being an English teacher for a year, realized I was really interested in issues of inequity and started working at a nonprofit, working on fair housing issues. Working on that got me really interested in what I called the JD glass ceiling, because I would work on building cases and hand them over to the lawyers, and the lawyers would make decisions based on their expertise that I didn't get to have input on based on mine. Went to law school thinking I was going to be a strategic civil rights litigator, and was introduced to international law and was like, oh man, this, this is fascinating.
Rebecca Crootof
This stuff is really, really interesting. And had the opportunity to take this incredible class with Oona Hathaway, where we connected with the CIA, with the DoD, with Human Rights Watch, with a bunch of different organizations, and said, what are the questions you don't have answers to? And we'll try and figure them out and compare answers. And through that class, started writing and working on drones, malicious cyber operations, and then eventually working on autonomous weapons systems. And it wasn't sort of until that moment that I was like, oh, wow, that long-ago interest in science fiction, it turns out, is very, very relevant to thinking through where new technologies challenge the law in various ways, and particularly in the military context, where the stakes are so high, where the law is so distinctive and unique. The law of armed conflict has a bunch of specific rules that don't apply in any other legal system. And so ever since, I've just been really interested in the interaction between new military technologies and the law. The ELSI Scholar was a new position at the agency. And though the role was defined in broad strokes, Rebecca wasn't brought on board with specific marching orders.
Tom Shortridge
Instead, the why and how of it all was up to her to help define. Part of my information-gathering tour of the first month was to get a sense of, okay, there's a lot of leadership who's interested in this. What does the average PM feel about this? What does the average SETA feel about this? For reference, a SETA is a scientific, engineering, and technical assistance contractor, and that
Rebecca Crootof
Was much more varied. Obviously, I came in and got asked if I was going to be the fun police, right? If I was just here to say no to things. And that is definitely not how I view my role. Right.
I view ELSI as something that can enable programs, as something that can benefit programs, and also as something that can benefit all the people impacted by the technological developments that are happening here. People had a lot of different assumptions, people had a lot of different questions. But by and large, I have not had conversations with people that are resistant to taking time to think through the implications. So I don't see my role here as coming in and telling anybody what to do. These are the program managers' programs.
They are the ones who are going to be responsible for them. They are the ones who are going to be making design decisions long after I've left. So my goal is actually to ask questions, to start a conversation, and to get folks thinking about their programs from a perspective or in a way that they hadn't before. And my favorite moments, right, the days where I'm like, oh, that was a successful day, are when I'm in conversation with a program manager or their team, and right in the middle of the conversation, somebody just pauses and is like, oh, oh, I hadn't thought of that before, and then starts frantically writing something down because there is something in that question that highlighted something that's going to be useful to keep thinking about. One thing that I keep going back to is that there is nobody here who isn't interested in solving a problem, right?
And they're interested in solving big problems, and they wouldn't be working on something if they didn't think it had any implications. Sometimes they're so focused on their particular problem, they haven't thought about some of the potential side effects of their solution. And most people I've talked to are just very curious about someone else's perspective on their problem and their solution and are excited to think creatively about it. So that has been honestly like one of the more surprising things since coming here was I just expected folks to be a little bit more defensive, to have a little bit more time that was going to be necessary in building relationships before people were willing to, you know, who's this upstart person come in to talk to me, right? And by and large, folks have just been really curious and really willing to engage in these conversations, and some of them have taken some of the things we've talked about and run with them, right?
Designed whole aspects of their program around considerations or effects that we identified in our ELSI conversations. So already seeing that our conversations can lead to attempts to do things differently within a program. So if I'm in a conversation with somebody who's working on human-machine teaming, I'm going to have questions about, have they thought about the interface, have they thought about automation bias, have they thought about how to minimize the risk of overtrust in the system, right? And that's going to be a completely different conversation than somebody who's working on plants that can sense chemical threats. And there you're going to be having a conversation about biosafety,
biocontainment, biosecurity. So there are certainly thematic questions that change depending on the program, zooming out even further, trying to raise questions that encourage thinking more broadly about the program's impacts. With around 100 program managers running over 200 programs spanning the breadth of the technological landscape, from exploring fundamental questions of physics all the way to building experimental experts, there's not a universal rule set for ELSI considerations. Every day is different, and I expect that to be true through the course of the year, roughly.
My plan for the first month was to meet as many people as I could and to learn as much as I could about what did people here think ELSI was, and what did people here do, what did people here need. So I talked to as many program managers, as many SETAs, as many office directors and deputy directors as I could to just sort of get a sense of what were times where they had had conversations around what I would now call ELSI issues, where that was productive, and what were programs where, oh man, they really wish they had talked about something earlier. I had a really interesting conversation with one program manager who had, by all accounts, a successful program, right. Technological challenge was met, transitioned for use by the Army, and the main goal of that particular program was create a cheap and easy way to facilitate communications in communications-deprived environments.
And he told me one of his biggest regrets was that he had not thought earlier in the program's design stages about the fact that this could have been hugely useful for addressing broadband equity issues. And it's not that it wasn't a successful program, and it's not that that can't still be pursued. But if he had been thinking about that, had that conversation, and those thoughts had gone into the design process or the transition plan, it could have been a much more useful technology for more populations, for different groups of people. And so based on those conversations, my next step for the next two months was to start to put together a minimally invasive, maximally useful process for ensuring that every program that's in the development stages has an opportunity to pause and think about ELSI considerations. And what that looks like for every single program is going to be completely different. There's a line about one size fits none, but there's a couple of thematic questions that are relevant no matter what type of program is being considered.
And so my goal for these two months has been start to develop that program, start to put it in place, start to introduce it to folks and have people get accustomed to it. And I've thought, okay, if I end this year, and if I have developed a process by which every new DARPA program has a moment to pause and think about ELSI and that's built into the structure, I wouldn't exactly say I failed, but I've done the minimum I came here to do. And so my goal going forward, in conjunction and in conversation with a lot of other people, I don't want to pretend this is just me, is to really change the way folks think about what ELSI is and ensure that people don't think, oh, yeah, it's the job of the ELSI visiting scholar or the ELSI SETA, or this one-time conversation to address all the ELSI issues with the program. But rather, it's a perspective, it's a mindset, it's a muscle that you can exercise over the course of a program's life cycle to keep asking, okay, what are other unknowns that we should be thinking about? What are opportunities we're not considering?
What are risks that we should be aware of and designing for? It's both humbling and also freeing to realize that this is an inherently impossible task. Right. Nobody writing Internet protocols at ARPA back in 1969, well, they might have foreseen maybe the rise of email, right, and file sharing, but they didn't foresee the rise of social media platforms and citizen journalism and clickbait and deepfakes and everything else that the Internet enables. But that doesn't mean it's not possible to identify some things ahead of time, right?
The fact that you can't identify everything does not mean that it's not worth taking time to pause and think about what are some foreseeable misuses? What are some areas that might be a potential vulnerability or a potential risk? What is some beneficial use of this technology that we hadn't thought of before, and then make program and design choices to maximize the benefits and minimize the risks. And my longer-term goal is for everybody to be adopting this perspective and approaching conversations around programs, taking a moment to think more broadly about their implications. Which, it's important to note, is not to say that those conversations haven't been happening already.
Tom Shortridge
They just might not have been done in a more formalized, deliberate context. A lot of what I'm talking about when I'm encouraging folks to adopt an ELSI perspective is to make explicit what is already happening implicitly, that people here are already thinking about some implications, people here are already thinking about risk factors, people are already thinking about vulnerabilities, they're already thinking about potential needs for countermeasures. But it happens haphazardly, right? It happens sporadically. And part of the goal of creating a little bit of an ELSI process for each program is to create a space, one, to make it explicit, and two, to do it better and in a more thorough, more considered way.
A guiding principle in the creation of programs at DARPA is the Heilmeier Catechism, a series of questions developed under former DARPA director George Heilmeier in the 1970s. These questions are often referred to within the agency as the HQs. And from a certain point of view, one could argue that they've always included ELSI considerations. The HQs include: what are you trying to do? And who cares?
Rebecca Crootof
If you are successful, what difference will it make? And what are the risks? You can take a narrow read of the HQs and say, oh yeah, what are the risks to this program? Oh, not being able to get enough money for this program. If you think about those questions broadly, that is taking an ELSI perspective.
So again, I think a lot of what I'm doing is trying to make explicit what's happening implicitly, for this to become a pervasive component of R&D for new technologies. Right. Thinking explicitly about not just can we. Right. And people often say, well, is ELSI just the should-we part?
And I think ELSI is actually also the how-should-we. It's not as clear cut as should we do this or not? It's how should we pursue doing this? How should we think about this? How should we set metrics?
How should we define success? How should we make design choices? Right. All these other aspects. That's actually much more the ELSI question than just the should-we at all.
Tom Shortridge
Speaking of design choices, ELSI can do more than merely advise existing or even nascent programs. One thing that folks don't always realize is that ELSI doesn't necessarily just need to be something that's attendant on a program. It can also be the inspiration for a program design. And so this is a case of a PM doing ELSI without necessarily thinking about it in those terms. But he was interested in the fact that augmented reality systems have all of these sources of risk and vulnerability, right?
Rebecca Crootof
Both in terms of being fed bad data and also in terms of potentially being used to induce nausea. And there's also a lot of interest in military applications of augmented reality. Right. It could be a really great way for somebody to identify what buildings are on a no target list, for example. But if the data being used for that is corrupted in some way is inaccurate, that could be hugely problematic.
And so he has developed a program around identifying and figuring out if there are ways to construct and design augmented reality systems that eliminate these types of risks. DARPA hard, definitely. But that would be the kind of program that I would think of as being inspired by ELSI concerns, right? Identifying a risk as the problem and thinking through, okay, how do we do this better? And, oh, man, there are so many technologies.
And if people had thought about this and designed it in from the get-go, how many problems would we have avoided with the Internet of Things if they had been designed to have security as part of their system, right. One of my favorite lines is the S in IoT stands for security. How many problems we would have avoided had that been designed in from the beginning. And so I see his program as addressing that. Other programs, addressing risks of suicide, issues associated with lack of sleep, all of those I think of as animated by ELSI concerns. So part of ELSI is identifying the risks and the negative consequences.
Part of ELSI is identifying the opportunities and the positive consequences. There's also thinking through unknown impacts. This one program was intended to create a magnetohydrodynamic drive to power underwater vehicles. This new technology is going to create a magnetic field. And the program manager is requiring the performers to measure how far out that magnetic field extends. Because one of the unknowns is, does it extend out a couple centimeters, or does it extend out a couple kilometers?
And unless you know how far it extends out, you don't know if it's going to have an impact on local sea life, if it's going to have an impact on sea mines in the region, if it's going to have an impact on other maritime infrastructure. And so that first unknown is important to identify early in the program's design stage, so that you get information that allows you to then evaluate, okay, what are the potential impacts of this? And then once you have that data, you can make further decisions about researching its impact on the environment. Thinking about potential uses ahead of time can expand the ways in which useful technology can actually be employed. Every program solves a problem, and I think part of the goal of the ELSI conversation is taking time to say, and what other problems could this solve?
How else might this be used? Successful DARPA programs have a history of impact well beyond the agency, and the ELSI initiative has no lesser ambitions. A long-term goal for the ELSI program at DARPA is to have this perspective not just be limited to the PMs, to the SETAs, to the leadership at DARPA, but rather something that percolates out into the broader R&D ecosystem, that, you know, working with the innovation fellows who are then going to go out and go back to their universities, go back into academia, talking about having an ELSI perspective. Right. Working with the service chief fellows who are going to go back to the services or on to other aspects of their lives and be aware of this ELSI perspective.
So part of what's fun about DARPA is the high turnover and the opportunity to have the ideas here percolate out. As Rebecca alludes to, DARPA program managers and office directors are, by design, here only for a limited time. So they're the folks who are coming into DARPA and then going out again. And hopefully, they'll take the concept that it's useful to adopt an ELSI perspective with them, and that will percolate out that way, but also with the performers. Any given program might have an ELSI component, or the program manager might ask the performers to report in on ELSI issues or identify how they're going to address specific ELSI concerns.
And so I think there's already a conversation that is happening, maybe some folks are calling it ELSI and maybe some folks aren't. But the point is, people are talking more about the side effects and the consequences and the potential beneficial uses that maybe weren't thought about otherwise, as well as the potential risks and concerns that maybe weren't being thought about otherwise. That hope of broad ELSI influence and adoption across the vast technology development landscape is buoyed by the wide-ranging nature of research initiated and enabled at DARPA. It is so exciting to see so many dedicated, intelligent folks thinking so broadly and so creatively about so many hard problems. And there's an aspect to my role where I just get to flit around to a certain degree, right, and learn a little bit about this program and a little bit about that program.
So I get a bit of a bird's-eye perspective on the sheer amount of different challenges that DARPA folks are tackling. So on any given day, I will go from a conversation about patching a cybersecurity vulnerability to one about turning astronaut waste into something useful, to one about capturing heat waste and converting that back into usable electricity. So it's just an incredible amount of different types of problems that DARPA folks are tackling. And at the same time, of course, it's DARPA. So people are also working on developing new weapons capabilities, and they're working on developing other capabilities that facilitate success in warfare, like AI human-machine teaming and dogfighting.
So I don't want to sound too starry-eyed about it, but I have been really, really taken and impressed by how deeply folks here care about doing things well. That's all for this episode of Voices from DARPA, but that's not all for ELSI. We'll continue to explore this topic throughout upcoming episodes, taking a deeper dive into specific programs and gathering more perspectives. Special thanks to Stacy Wurzba for her assistance in producing this episode, and thank you for listening.