AI Ethical Principles – Highlighting the Progress and Future of Responsible AI in the DoD

  • By: JAIC Public Affairs

On February 24, 2020, the U.S. Department of Defense officially announced the adoption of ethical principles for the use of Artificial Intelligence, following recommendations provided by the Defense Innovation Board. The DoD’s Ethical Principles for AI apply to both combat and non-combat functions and assist the U.S. military in addressing the novel ethical ambiguities and risks associated with the use of AI. These principles encompass five areas and guide DoD AI capabilities to be responsible, equitable, traceable, reliable, and governable. They are intended to promote continual engagement with AI ethics as a process applied interactively and iteratively across the entire AI lifecycle. Over the past year, the JAIC has made meaningful progress on its journey to guide the Department’s operationalization of the principles.

"Ethical principles are the foundation and guide posts for everything we do with artificial intelligence in the Department of Defense," said Lieutenant General Michael Groen, Director of the Joint Artificial Intelligence Center. "Our ethical baseline is the core of our trusted-AI ecosystem, and links us with international partners who share our values for lawful and ethical AI development. This first anniversary is not only an important milestone, but also a reminder of the important work we have ahead of us to fully implement AI principles into all AI development pipelines. Thanks to the diligent work of leaders across the DoD, we look forward to making even greater strides in 2021."

Building a Responsible AI Construct – People, Processes, Partnerships, and Policy

While the AI ethical principles focus on the product lifecycle, truly operationalizing them requires the DoD to take a comprehensive approach that also considers and evaluates its organizational operating structures. This holistic approach, known as “Responsible AI,” examines people, processes, partnerships, and policy as a means to create an organizational culture and operating construct around responsible AI development. Ultimately, the goal is to build trust with end users, warfighters, and the American public that DoD AI-enabled systems will be safe and will adhere to ethical standards.

Over the past year, the JAIC, working alongside AI experts from across the DoD, made notable progress toward implementing the AI principles through the following initiatives:

  • Hired Ms. Alka Patel to help the JAIC lead DoD-wide Responsible AI initiatives.
  • Established and convened a DoD-wide, interdisciplinary Responsible AI Subcommittee that collaborates on advancing Responsible AI across DoD.
  • Piloted the Responsible AI Champions program with a cohort of cross-functional experts to understand the principles, identify tactics for operationalization, and seed a network of Responsible AI ambassadors. The goal is to roll this effort out to a broader DoD audience in the coming year.
  • Released the DoD Workforce AI Education strategy to cultivate an AI-Ready workforce, including a requirement to incorporate Responsible AI training into every role that touches AI (not just the technologists!).
  • Awarded an Other Transaction Authority acquisition vehicle and online portal called “Tradewind” that will enable the DoD to engage with commercial service providers and other entities not only on technical and development aspects, but also on ethics and Responsible AI.
  • Announced a Testing and Evaluation Blanket Purchase Agreement designed to help the DoD and Federal Government agencies procure testing and evaluation services, which are critical to operationalizing the AI ethical principles.
  • Stood up the AI Partnership for Defense, a group of 13 like-minded military and defense forces, which aims to promote the responsible design, development, and use of AI by engaging in coordination and collaboration on AI technologies, governance, and policy to drive AI-enabled interoperability.
  • Began working with DoD safety communities to understand and develop ways to adopt and instantiate the AI principles in safety practices, guidelines, and policies.

“Inculcating Responsible AI across the DoD positions our military to establish norms for responsible design, development, and use of AI within the framework of our democratic values. It enables us to earn the trust of the American public, industry, and the broader AI community to sustain our technological edge and along the way, fortifies our international partnerships with allies that share our values. All of this furthers the DoD’s mission to strengthen national security and increase mission effectiveness,” said Ms. Alka Patel, lead for Responsible AI at the JAIC.

The adoption of the Department of Defense’s Ethical Principles for Artificial Intelligence was one step along a larger journey. Moving forward, the JAIC will continue to lead, coordinate, and collaborate with DoD partners to set the conditions for an ethical AI culture. Areas of focus include increasing Responsible AI literacy across the DoD workforce and integrating Responsible AI processes into the Joint Common Foundation’s platform architecture, embedding automated tools and corresponding assessment points in the DevSecOps environment. The JAIC will also continue to work across government, with the American tech industry and academia, and with allies to advance dialogue and cooperation on AI ethical principles.

The DoD’s commitment to safe, ethical, and responsible AI has generated helpful insights and observations from experts across the government and the private sector.

Mr. Steve Blank, Adjunct Professor of Management Science and Engineering at Stanford University, said the DoD’s adoption of AI ethical principles is an important milestone that will guide the responsible and ethical implementation of AI-enabled capabilities. "The ethical principles of Responsible, Equitable, Traceable, Reliable, and Governable will ensure we build AI that makes us smarter, safer, and more secure."

For more information, please see the recent JAIC Perspectives Video featuring Ms. Alka Patel, the JAIC’s lead for Responsible AI.