Overview
In 2020, the Department of Defense (DoD) became the first military in the world to adopt ethical principles for
the use of AI in military operations. The Chief Digital and Artificial Intelligence Office (CDAO) leads DoD
efforts to operationalize its commitment to Responsible AI (RAI) by creating policy, tools, and training for DoD
components. The CDAO also participates in interagency and international RAI efforts and shares best practices
across organizations so that Government personnel can assess whether AI systems are safe, reliable, and
trustworthy.
AI-enabled capabilities are advancing rapidly, providing powerful new possibilities for prediction, automation,
content generation, and accelerated decision-making. These technologies are already transforming the face of
society and warfare. Whether such transformations will have positive or negative effects on our nation depends
on whether these technologies are designed, developed, and used responsibly. A responsible approach to AI means
innovating at a speed that outpaces existing and emerging threats, and with a level of effectiveness that
provides justified confidence in the technology and its employment.
The DoD has been at the vanguard of responsible AI work for years. We do not develop or deploy AI capabilities
unless they are aligned with DoD's AI Ethical Principles. The Responsible AI Toolkit operationalizes those
principles, enabling DoD personnel to identify and mitigate risks at every stage of the product development
lifecycle.
The CDAO is responsible for building the RAI technical and workforce capability for the DoD, driving
organizational accountability to ensure adherence to the DoD AI Ethical Principles, and fostering broader RAI
adoption across industry, academia, and international partners and allies.