Primary Topic
This episode delves into the integration of ethical, legal, and societal implications (ELSI) in DARPA's technology programs, particularly focusing on how these considerations shape research and development.
Episode Summary
Main Takeaways
- ELSI considerations are crucial in the initial stages of technology development to foresee and address potential ethical and societal issues.
- DARPA's Safe Genes program exemplifies the integration of ELSI in biotechnological research, particularly the responsible management of gene-editing tools like CRISPR.
- The episode emphasizes the importance of collaboration with ethics experts to guide technology development responsibly.
- DARPA is working towards embedding ELSI considerations in various technological fields, including AI, autonomous systems, and military technologies.
- The discussions highlight the broader impacts of ELSI integration on policy-making and future technology applications.
Episode Chapters
1. Introduction to ELSI
Overview of the ethical, legal, and societal implications in DARPA's projects. Discussion starts with the genesis of the ELSI concept from the human genome project. Tom Shortridge: "As the ELSI concept was born out of the human genome project, it's fitting that we begin this conversation with an example from DARPA's biological technologies office."
2. Safe Genes and CRISPR
Focuses on the Safe Genes program's efforts to manage gene drives and off-target effects of CRISPR technology in gene editing. Leanne Parr: "The potential for off-target effects was acknowledged, but not often quantified by the labs who were trying to rapidly increase and facilitate genome engineering for a variety of purposes."
3. Broadening ELSI Applications
Explores how DARPA integrates ELSI across different technological domains, impacting policy and operational approaches in military and civilian contexts. Phil Root: "ELSI has been incredibly important to the work that we're doing in the Strategic Technology Office because as we look for new areas of competition, we realize that there are non-standard implications."
Actionable Advice
- Engage Ethicists Early: Involve ethicists and legal experts at the initial stages of technology development to guide ethical considerations.
- Anticipate Unintended Consequences: Proactively consider potential ethical and societal impacts of technologies during the design phase.
- Implement Rigorous Testing: Regularly test technologies for ethical compliance and societal impact, particularly when involving gene editing or AI.
- Educate Teams on ELSI: Ensure all team members are informed about ethical, legal, and societal implications relevant to their projects.
- Document Ethical Processes: Keep detailed records of all ELSI considerations and decisions throughout the development process to enhance transparency and accountability.
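As a concrete illustration of the last item, documenting ethical processes, here is a minimal sketch (in Python) of what a structured ELSI decision log could look like. The record fields and the example entry are hypothetical assumptions for illustration, not a format prescribed by DARPA or the episode; the point is simply that each decision, the options weighed, and the rationale get captured for later review.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ElsiDecisionRecord:
    """One entry in a program's ELSI decision log (hypothetical schema)."""
    decision_date: date
    topic: str                       # e.g. "training data selection"
    concern: str                     # the ethical/legal/societal issue identified
    options_considered: list[str]    # alternatives that were on the table
    decision: str                    # the path actually chosen
    rationale: str                   # why, and how ELSI input shaped the choice
    reviewers: list[str] = field(default_factory=list)  # who was consulted

# Example usage with a made-up entry:
log = [
    ElsiDecisionRecord(
        decision_date=date(2024, 1, 15),
        topic="Training data selection",
        concern="Non-representative data could bias downstream decisions",
        options_considered=["reuse legacy dataset", "commission representative collection"],
        decision="commission representative collection",
        rationale="ELSI working group flagged demographic gaps in the legacy data",
        reviewers=["ELSI working group"],
    )
]
```

A log like this becomes the "body of evidence" discussed later in the episode: a traceable account of why the system was built the way it was.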
About This Episode
In this episode, we’ll be taking a deeper dive into ELSI – ethical, legal, and societal implications of new technologies and capabilities – and specific examples of how DARPA programs have incorporated those considerations into their structure.
We’re highlighting three examples of how DARPA integrated ELSI throughout the program lifecycle via the counsel of experts from the medical, scientific, legal, and ethics communities to assist program managers and performers in identifying and mitigating any potential issues.
The first program, out of our Biological Technologies Office, is Safe Genes, which supported force protection and military health and readiness by developing tools and methodologies to control, counter, and even reverse the effects of genome editing—including gene drives—in biological systems across scales.
The second program, Urban Reconnaissance through Supervised Autonomy (URSA) from our Tactical Technology Office (TTO) aimed to enable improved techniques for rapidly discriminating hostile intent and filtering out threats in complex urban environments.
And, finally, the current In the Moment program in our Information Innovation Office (I2O) seeks to identify key attributes underlying trusted human decision-making in dynamic settings and computationally representing those attributes, to generate a quantitative alignment framework for a trusted human decision-maker and an algorithm.
People
Stephanie Tompkins, Leanne Parr, Phil Root, Matt Turek, Bart Russell, Rebecca Crootof
Companies
DARPA, Massachusetts General Hospital, University of California
Books
None
Guest Name(s):
None
Content Warnings:
None
Transcript
Stephanie Tompkins
Coming to DARPA is like grabbing the nose cone of a rocket and holding on for dear life. DARPA is a place where if you don't invent the Internet, you only get a B. A DARPA program manager quite literally invents tomorrow. Coming to work every day and being humbled by that. DARPA is not one person or one place.
It's a collection of people that are excited about moving technology forward. Hello, and welcome to Voices from DARPA. I'm your host, Tom Shortridge. If you haven't already, we recommend first listening to our previous episode introducing ELSI. On this episode, we'll be taking a deeper dive into ELSI, the ethical, legal, and societal implications of new technologies and capabilities, and specific examples of how DARPA programs have incorporated those considerations into their structure.
Tom Shortridge
As the ELSI concept was born out of the Human Genome Project, it's fitting that we begin this conversation with an example from DARPA's Biological Technologies Office. Here's DARPA Director, Doctor Stephanie Tompkins. We had a program called Safe Genes a number of years ago where we were focusing on how you would undo the effects of gene drives if things went wrong. In normal Mendelian inheritance, which is how we normally inherit our genes, the offspring can inherit from either parent, and there's a 50/50 chance that the offspring will receive a trait from either parent. What happens with a gene drive is it forces all of the offspring to receive that gene that's driven through the population 100% of the time.
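To make the inheritance contrast concrete, here is a toy simulation sketch in Python. It is a hypothetical illustration, not a model from the Safe Genes program: under Mendelian inheritance a carrier parent passes an edited allele to roughly half of its offspring, while an idealized gene drive converts essentially every offspring of a carrier into a carrier.

```python
import random

def carrier_frequency(drive: bool, generations: int = 10,
                      pop_size: int = 1000, start_freq: float = 0.05) -> list[float]:
    """Toy model: fraction of individuals carrying an edited allele each generation.

    Mendelian case: a carrier parent transmits the allele to ~50% of offspring.
    Idealized gene drive: any offspring with at least one carrier parent is a carrier.
    (Ignores fitness costs, resistance alleles, and population structure.)
    """
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        carriers = 0
        for _ in range(pop_size):
            p1 = random.random() < freq   # is parent 1 a carrier?
            p2 = random.random() < freq   # is parent 2 a carrier?
            if drive:
                inherits = p1 or p2       # drive copies itself into every offspring
            else:
                inherits = (p1 and random.random() < 0.5) or (p2 and random.random() < 0.5)
            carriers += inherits
        freq = carriers / pop_size
        history.append(freq)
    return history

print("Mendelian: ", [round(f, 2) for f in carrier_frequency(drive=False)])
print("Gene drive:", [round(f, 2) for f in carrier_frequency(drive=True)])
```

Running it shows the Mendelian allele hovering near its starting frequency while the driven allele climbs toward 100% of the population within several generations, which is exactly why the ability to undo a drive mattered to the program.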
Leanne Parr is a SETA, or scientific, engineering, and technical assistance, contractor in the DARPA Biological Technologies Office, or BTO. She joined the agency as the Safe Genes program was being formulated and supported the program throughout its life cycle. When Safe Genes was announced in 2016, biotechnology was rapidly adopting a new genome editor that we called CRISPR, which was making its way into many types of biological research. The potential for off-target effects was acknowledged, but not often quantified by the labs who were trying to rapidly increase and facilitate genome engineering for a variety of purposes. Off-target effects are what can happen when a gene editor makes a cut, so a tool like CRISPR would come in and cut the DNA.
Leanne Parr
Much like when you're trying to use your scissors to cut paper and you've drawn a line that you're trying to cut along, and you're trying to make it as straight as you can, sometimes it goes off the mark, right? We've all tried to wrap presents, and you get that awkward cut around the edge, and then you're trying to tape it and paste it back together. The body does that, too. And so as the body is trying to realign those cuts that the genome editors made, it doesn't always do that correctly or perfectly every time. That misalignment or misreading is what we call an off-target effect. Sometimes you can predict the off-target effects, like when you're lining up the paper and you realize that your pattern doesn't match, but sometimes you can't anticipate where that's going to happen.
The off-target effects can be really small and not impactful to the body, but they can also be much more severe. And that was something that we wanted to anticipate and prepare for and control against so that we could have safer biological applications and protect against accidents or the potential for misuse of those genome editors. To be clear, Safe Genes did not look at driving genes through a human population. We were looking at mosquitoes. The thing about Safe Genes is that our goal was to develop a toolkit, and also a paradigm shift, to incorporate control into gene-editing development.
In line with that, we had lots of different kinds of tools: different types of control for genome editors, different types of reversal for genome editors, but also other things along the way. Programs like Safe Genes don't look at ELSI in a vacuum. They seek the counsel of experts from the medical, scientific, legal, and ethics communities to assist the program managers and performers in identifying and mitigating any potential issues. That program had an ELSI panel: ethicists and biological scholars, folks who could really help all of the teams work through the thought process as they were developing technology and ask, is this the direction we want to go in? Or if we do go in this direction, what kinds of things will we need to understand?
The role of the ELSI group was to serve as unbiased external experts who provide feedback to both DARPA and the researchers on any concerns or evolving issues that might impact the research, the way it was conducted, plans for pursuing it, or the transition opportunities. At the end, the Massachusetts General and University of California teams from Safe Genes published their ethical codes, but also the processes of community engagement, through focus groups and interviews and surveys and workshops with subject matter experts in the gene drive and community engagement fields, that helped incorporate ELSI approaches into their research design and technology development. So the outcomes from Safe Genes' ELSI work, including the ethical codes developed by the teams and the publications about the processes they went through integrating ELSI with their research design and informing their technology development, can be used more broadly by DARPA's transition partners as they pick up the technology and continue moving it forward, but also by other emerging technology areas that are looking to integrate ELSI into their research design and technology development. They can use those lessons learned, documented through publications and workshop reports, to implement their own ELSI approach and adapt it to their own research needs. One example of transition from the Safe Genes program was a spinoff company from Massachusetts General Hospital that had identified a way to predict and analyze off-target effects in individuals and in populations of humans.
And so the spinoff company, Securedx, was able to commercialize this assay, which is a series of tests involving sequencing of DNA and countermeasures, to enable researchers and developers of different types of drugs, like cancer drugs, to determine if there are going to be off-target effects of the treatment to the individual or potential risk to the population. Depending on how you look at it, off-target effects can actually be another way to define ELSI. It's basically looking at the research potential from multiple angles to identify potential issues down the road. We got a lot of questions when we were scoping the Safe Genes program about why we needed to fund ELSI engagement activities as part of the team. It was relatively new.
It was only two years after the National Academy's 2014 report on ethics in emerging security technologies, and the BTO program office was only a couple of years old as well. It was still kind of getting its legs underneath it and trying out different approaches. Before 2016, there hadn't been a lot of publication about ELSI as part of a research topic. Safe Genes and other programs around the same time were just starting to mention ELSI in their special notices, in the Proposers Day announcements, and in the broad agency announcements. It was all kind of starting around that time.
So it was really exciting to be trying something a little bit different, integrating it into the research and encouraging the research teams to bring those considerations forward. We got a lot of questions about that during the Proposers Day and during that sort of selection process. We got a lot of questions about teaming from proposers, kind of being like, oh, we can do that? Really? Can we really provide support for our team members to be able to engage on these activities?
And it was a very enthusiastic, yes, please do; please encourage your team to actively engage here and provide enough resources for them to be able to do that. And that's something that we've seen more and more in both the Biological Technologies Office as well as other offices at DARPA. The Safe Genes program's incorporation of ELSI was something of a watershed moment, opening the door to new possibilities that could spread throughout the agency. The most visible of those is the visiting ELSI scholar position, as explored in our previous episode.
Tom Shortridge
But others have permeated tech offices and thought processes.
Here's Doctor Phil Root, director of the Strategic Technology Office. ELSI has been incredibly important to the work that we're doing in the Strategic Technology Office because as we look for new areas of competition, we realize that there are non-standard implications, implications that we did not see in our traditional or typical portfolio, that require further perspective. And so we're thinking about ELSI from the beginning, because as we go into new fields of research, we really want others looking at these from a multitude of perspectives. Doctor Matt Turek, deputy director of the Information Innovation Office, explores the considerations of that office. We work in AI, we work in secure and resilient systems.
Matt Turek
We work in tools for the information domain, we work in cyber. And those are domains where there may be fraud issues. It might be how do you use, or can you use publicly available information? How do you use commercially available information? How do you carry out human subjects research?
Those are around maybe the foundational elements of the research process. But then there's also what is the capability that you're creating? What might be some of the unintended consequences? What might be side effects? Who is that capability designed for?
What happens if it becomes more broadly used? Those are all questions that I feel like fit under that umbrella of ethical, legal, and societal implications. I don't think there's a crisp definition, and I think that definition changes based on the context. Another important point is that you don't have to be an ELSI scholar to consider those implications. As noted by Doctor Bart Russell, deputy director of the Defense Sciences Office.
Bart Russell
I don't have any formalized training in ethics or any sort of legal training. I kind of backed into it because so much of the work I do is right at the nexus between humans and autonomous or AI-enabled capabilities and platforms that it's almost impossible to work in that space and not think directly about the ethics and the implications of the design choices you make on how those systems are used in various contexts and scenarios. Most notably, I have a big interest in human-systems interfaces and how you build an interface to enable the operator to make the best decisions with that tool, so it can extend that operator's capability to maximum effect. And that inherently has some questions in it. How do you augment decision-making in the best possible way?
Tom Shortridge
An important component of decision-making for the military is determining the commander's intent. Commander's intent guides everything within the military, certainly in my service in the Army. And what it means is, what are the bounds for acceptable behavior as defined by the commander? Commander's intent can be encoded pretty easily for humans, human to human. I can say something to a subordinate, and they pretty much understand what that means in terms of bounds.
Phil Root
You cannot do the same thing for a software program. You can't anticipate, you can't expect, that you say the same thing to a software program and it understands those same bounds in the same way. It didn't grow up with the same ethical norms; "grow up", of course, being an anthropomorphizing of an AI or some algorithm. And so commander's intent is key to the US military and a vulnerability for algorithms, because you have to be so explicit in ways that you typically never have to be.
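To illustrate how explicit that encoding has to be, here is a purely hypothetical sketch, not anything built for URSA or any DARPA program, of turning one narrow slice of commander's intent into machine-checkable rules. Everything a human subordinate would infer from context and training has to be written down, and anything left unstated simply does not constrain the software.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str              # e.g. "track", "flag", "alert_operator"
    target_zone: str       # where the action would take place
    civilians_nearby: bool

# A human hears "minimize risk to civilians" and fills in the rest from training and context.
# Software only has the bounds that are spelled out below; nothing else constrains it.
EXPLICIT_BOUNDS = [
    lambda a: a.kind in {"track", "flag", "alert_operator"},            # only these action types
    lambda a: a.target_zone != "protected_zone",                        # never act in protected zones
    lambda a: not (a.civilians_nearby and a.kind != "alert_operator"),  # defer to the operator near civilians
]

def within_commanders_intent(action: ProposedAction) -> bool:
    """True only if the proposed action satisfies every explicitly encoded bound."""
    return all(rule(action) for rule in EXPLICIT_BOUNDS)

# A flag raised near civilians violates the third bound, so it is rejected.
print(within_commanders_intent(ProposedAction("flag", "market_district", civilians_nearby=True)))  # False
```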
Tom Shortridge
During his time as a program manager in the Tactical Technology Office, Phil launched the Urban Reconnaissance through Supervised Autonomy program, or URSA, which Bart later took over. URSA, in many ways, was sort of the first time my eyes were opened to how early we should be, and can be, thinking about the ethical, legal, and societal implications of any technology we build. This is a program whose whole purpose is to ask the question: can we use a network of distributed and mobile sensors to improve the ability of our operators to identify threats who are hiding in and around civilian spaces, this mixing of bad guys and people we wanted to protect? And we realized that any use of autonomy in that space could get it wrong.
Bart Russell
And our hypothesis was that what we really wanted to do was to try to use autonomy to differentiate the two. It became very clear that anything we did was going to impact civilians in a way that was unpredictable, and we did not like that. I did not like that unpredictability. So we wanted to get ahead of it right from the jump. I did essentially a minor dissertation in ELSI and came up with this binder of, here's what's been done in the past, and here's what we think we should do. And so we proposed a different format at the time, which was feed-forward: bring in the ELSI scholars from the beginning, think about the problems, try to anticipate the engineering problems from the beginning, and then give the engineers some time to implement that before they found the problems on their own. We wanted to think about the problems first and then give potential solutions to engineers. It was largely inspired by the way IT systems do this in security.
Bart Russell
If you try to patch IT systems after they've been built and say, let's make it secure now that it's finished, you end up with a product that is neither secure nor actually very functional. You usually constrain the functionality of that system considerably as a result. Instead, the way to do it now is you build security in from the ground up, in the paper-to-prototype phase of an IT system, and you end up with something that is quite secure and without compromised functionality. We thought, what if we did the same thing for ethics? What if, just like DevSecOps is for IT systems, we do dev-eth-ops for our autonomous system with the URSA program? Maybe we could build in what later became the RAI principles, the responsible AI principles, at an early stage; maybe we wouldn't end up with a constrained system, and maybe we could better align that system with our ethical principles.
Let's treat it like an engineering problem, and we'll take what then became the responsible AI principles and say, from an engineering perspective, how do we take equity and transparency and derive features in the software architecture that will align our system with those principles? Every time we sat down with our test operator and said, okay, this is your base system and this is your, quote, ELSI system; this is how they differ; these are the different features, he said, that is not an ethical feature.
I need that to do my job. And then we said, okay, we'll put that feature into the base system. That happened for every single one of our, quote, ethical features, until we did not have two different systems. We had one ELSI-embedded system.
But a light bulb went off for me: ethics is not an other, and the performance system is the one that allows our operators to act in line with their training as professional soldiers. And the warrior ethos is that they can undertake these acts of, in some cases, extreme violence for a greater purpose, right? And that doesn't work if it's not in line with the laws of armed conflict, if it's not in line with international humanitarian law, and if it's not in line with the rules of engagement. And any system that doesn't extend that capability is not the performance system.
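As a rough illustration of treating ethics as an engineering problem, here is a hypothetical sketch; the principle, the requirement wording, and the event format are assumptions for illustration, not URSA's actual design. It shows how a responsible-AI principle such as traceability might be turned into a concrete, testable requirement that can be checked automatically during development.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EngineeringRequirement:
    """A concrete, testable requirement derived from a responsible-AI principle."""
    principle: str                   # e.g. "traceable"
    requirement: str                 # the engineering statement the design must satisfy
    check: Callable[[dict], bool]    # automated check run against a system event log

# Hypothetical example: "traceable" becomes "every threat flag records its supporting evidence".
traceability = EngineeringRequirement(
    principle="traceable",
    requirement="Every autonomously generated threat flag must cite the sensor evidence behind it",
    check=lambda event: event.get("type") != "threat_flag" or bool(event.get("evidence")),
)

# Running the check over a made-up event log during testing:
events = [
    {"type": "threat_flag", "evidence": ["sensor_12: loitering pattern near checkpoint"]},
    {"type": "threat_flag", "evidence": []},   # would fail the traceability check
]
print([traceability.check(e) for e in events])  # [True, False]
```

Framed this way, an "ethical feature" is just another requirement with a test behind it, which is the light-bulb point made above.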
Tom Shortridge
Elements of the ELSI components of the URSA program helped inform the development of Matt Turek's In the Moment, or ITM, program. One of the best things about DARPA is interacting with the other PMs and pinging ideas off of one another. Matt and I had a lot of early conversations about this, and he was heavily involved in understanding some of what we did on URSA as he was architecting the ITM program. So ITM is In the Moment, and that is a basic research program. That means that we are focused on very foundational issues, and it is essentially looking at a core trust issue, which is: are humans willing to delegate life-or-death decision-making authority to an algorithm?
Matt Turek
And we're studying that in the context, in the setting, of battlefield triage. So you could imagine in potential future conflicts, if casualty numbers are high and we are overwhelmed, having autonomous systems that can help with those medical decisions, either fully autonomously or with a human on the loop; some variation could be very useful in those settings. There's a core hypothesis of the program, which is that algorithms that exhibit attributes of trusted human decision-makers would be more likely to be trusted by humans, and so they will make the sorts of decisions that we ourselves would make, and thereby we'd be more willing to delegate decision authority in very challenging problems to the algorithms. So part of the program is focused on finding what are those attributes that lead to trust? How do I get information about those attributes from humans?
How do I model that in a quantitative way? Creating the technology for comparing one decision-maker against a reference group of decision-makers that are trusted, and using that to build more human-aligned algorithms. So all of that naturally involves human subjects research, but there are significant implications there around ethical, legal, and societal issues. And so part of the design of the program was to have a team that looks at those ELSI considerations and actually embeds discussions and process into the development of the technology itself. That team actually has two pieces: one that is more inward facing, that informs those ELSI discussions on the program, and another that's more outward facing, to interact with the policy community.
We want to be informed by current policy, but we also want to help inform future policy. If the technology is successful, what are the implications for future policy? What should future policymakers, in the DoD or more nationally, be thinking about in the context of automated medical decision-making and humans turning over essentially life-or-death authority to algorithms? As a research and development agency, DARPA does not make or set policy. But we do like to offer better options for people who do make policy.
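To picture the quantitative alignment idea described a moment ago, here is a deliberately simplified sketch. It is not ITM's actual method, and the scenarios and scoring rule are invented for illustration; it just scores how often a candidate decision-maker, human or algorithmic, agrees with a reference group of trusted human decision-makers across the same triage scenarios.

```python
def alignment_score(candidate: list[str], reference_group: list[list[str]]) -> float:
    """Toy alignment metric: average agreement between a candidate's decisions and
    each trusted reference decision-maker over the same set of scenarios."""
    scores = []
    for reference in reference_group:
        agreements = sum(c == r for c, r in zip(candidate, reference))
        scores.append(agreements / len(reference))
    return sum(scores) / len(scores)

# Hypothetical triage scenarios: each entry records which casualty is treated first.
trusted_humans = [
    ["casualty_A", "casualty_B", "casualty_A"],
    ["casualty_A", "casualty_B", "casualty_B"],
]
algorithm_decisions = ["casualty_A", "casualty_B", "casualty_A"]

print(f"Alignment: {alignment_score(algorithm_decisions, trusted_humans):.2f}")  # 0.83
```

A higher score would, under the program's hypothesis, correspond to an algorithm whose decisions humans are more willing to trust and delegate to.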
Bart Russell
The responsible AI principles are useful in that they have laid the groundwork for what we as engineers and as the R&D community can then do. But they don't define how to engineer. They don't tell you how to do it right. So, is the system responsible? Is it equitable? Is it transparent?
Is it traceable? And is it governable? Those are the five principles. They are all specific in the language, but just like any policy piece, there's a lot of wiggle room. And how you measure whether a system is responsible is a very big open question.
When we can provide concrete ways and frameworks, built on law, built on precedent, for how to do that concretely, it gives the policymakers a much more concrete idea about how they can implement safely, how they can construct their policy in ways that help the whole area move forward, as opposed to saying what we should stop doing. And so that's part of the reason for the policy outreach. And it's still very much unknown as to whether we'll be successful or not. But even having those conversations with the policy community has an impact itself. And DARPA has unique convening authority to bring together organizations.
Matt Turek
We're doing regular ELSI outreach meetings. Again, in terms of that policy piece, the last meeting had about 80 participants from 60 US government organizations and, I think, 17 people from other countries. One of the conversations that we had with the Uniformed Services University, which trains about a quarter of military medical physicians, was around the need to potentially do what's called reverse triage in the context of future conflicts. And reverse triage is an explicit decision to try and return people to the fight as quickly as possible over saving lives. And that might be because we are in a military conflict where, overall, the most important strategy is to be able to persist and be successful in that conflict, versus individual decisions about human lives.
I can't imagine being in that setting myself. I'm thankful I'm not in the medical community, but this is one of the challenges that they're struggling with. And that's just one example of some of the very challenging decisions that humans are faced with. If we ever want to move to automated decision making, we're going to want to have AI algorithms that represent the sort of decisions that we want made. And it's not just about the medical competency piece.
And for sure, that is the domain of ethical, legal, and societal implications. And we need to be thinking about those ELSI concerns and considerations in the development process. We're starting to get developers in the habit of identifying ELSI issues, of explicitly recording the decisions and why they chose a particular development path and how that was informed by ELSI. And that might not seem important, but think about that as collecting a body of evidence over the development process for the sort of system that you've built and why you've built it, and how it was informed by these other considerations. And I think that's in service to much more transparency around our AI systems.
We talk about AI algorithms being transparent, but what this is headed towards is more transparency in the development process itself, and again, being thoughtful in the design process and structured in our thinking about those ELSI issues and having a record of them, and again, being able to provide that as a body of evidence as, you know, in some future world, we might consider developing an operational system and using it for operational purposes. So those are practical problems that we're looking at now in terms of how we do ELSI in some of these very challenging spaces, and what the lessons learned are that we can share with the broader community. With each of the programs we've explored in this episode, Safe Genes, URSA, and In the Moment, there have been ELSI groups or panels advising and, in some cases, embedded into the very fabric of the program.
Tom Shortridge
But those types of advisory groups are not necessarily a part of all programs at DARPA. Here's Doctor Rebecca Crootof, DARPA's first visiting ELSI scholar, who we talked to at length in our previous episode, to explain how ELSI is being integrated across such a wide spectrum of programs. We've been developing a process for every DARPA program that has a one-size-fits-most component and a tailored component. The one-size-fits-most part is to take time during program development to consciously identify the ELSI considerations that are raised by a particular program, through an exercise that I and others have been developing that highlights the different issues, the opportunities, the risks, the unknowns. The tailored component is how the program team decides to address those considerations. I'll make recommendations, but it's up to the program team whether to adopt or modify them.
Rebecca Crootof
Some programs will address ELSI by incorporating new metrics, say by requiring performers to ensure that their systems are trained on diverse or representative data, or to ensure that their proof of concept is interoperable with other relevant systems or extant databases. Others might change their program design, maybe by increasing the number of performers to avoid inadvertently fostering a monopoly, or maybe by requiring performers to engage with the FDA or the EPA or other regulatory agencies to ensure that they're going to be on target for regulatory approval. And others, like Safe Genes or URSA or ITM, may make ELSI a major component of the program itself. And still other programs won't change a thing. Maybe they're doing fundamental research, and there are no immediate likely impacts associated with the program itself.
But now the program team is approaching the problem with a greater awareness of the potential implications of their research, which might inform decisions they need to make during the program's lifecycle, or maybe future expansions that are based on a successful proof of concept. Every program here raises ELSI considerations. So, to quote one member of an ELSI group, if it wasn't going to have any implications, why should DARPA be working on it? Some of these considerations can be addressed early on by the program design or technological design choices, and some might only be relevant at proof of concept.
We're trying to design a right-size-fits-most process that results in a tailored end product for each DARPA program. And what of the future of ELSI at DARPA? Within the Strategic Technology Office, we're growing in some new areas that we haven't explored before, such as economics, such as finance. And so there I wanted to zoom out from just a conversation about ethics to societal impacts and implications that are new to us.
Phil Root
And so we really benefit from having very mature and thriving ELSI conversations within the agency. So we could come up with a concept and go to a community within DARPA and say, what do we think are the implications of this economic concept or this new program approach? And we don't have to think of it just within STO, but rather can join a thriving conversation. We greatly benefit from that community across DARPA. I think there are going to be core challenges in how we scale ELSI out across an entire organization like DARPA.
Stephanie Tompkins
And I think that's a problem that we haven't fully dealt with yet. I don't know that we know how to do that. I think that's one of the values of having the embedded ELSI scholar: it allows us to get after that question of how we do ELSI at scale across an agency. And then hopefully, maybe one of the disruptive DARPA things is, here's how to do ELSI at scale, and we could enable other organizations to do that.
It might not be core to industry missions, for instance, in the context of AI, but again, I think there's growing recognition about the implications of the technology space itself. And so maybe one of the core DARPA disruptions here is showing people how to do ELSI in a way that still allows them to operate on the, let's say, commercially relevant timelines that they need to, but build some more ELSI rigor into their development process. We don't want this to just be a DARPA thing. The common question we get is, how do you know if you've done enough ELSI on a program? And the answer is: technology is never finished.
Bart Russell
So ELSI is probably never finished either. That doesn't mean we can't scope the technology problem in our early R&D stages, and it doesn't mean we can't scope the ELSI at those early stages of R&D. But that does require a broader network of folks who know how to do ELSI on a broader scale, so that when we pass off our technologies, whether it's to a prime, whether it's to industry, whether it is to a service lab, they know how to pick up that activity as well. So we are working really hard to grow the network, so we can get a broader set of perspectives on our working groups, but also so more people will have their own ways of doing this independently of whatever DARPA does.
Tom Shortridge
That's all for this episode of Voices from DARPA. We'll continue to explore ELSI on future episodes. Special thanks to Stacy Wurzba for her assistance in producing this episode, and thank you for listening.