The JAIC’s Test, Evaluation, and Assessment Team Shapes Future AI Initiatives
- By: The JAIC
Department of Defense leaders understand that artificial intelligence will likely change the character of future wars and conflicts. In the fog of combat, warfighters will rely on decisive, rapid, and agile AI capabilities to provide an operational advantage. The JAIC’s Test, Evaluation, and Assessment (TE&A) team was established to help the DoD conduct rigorous, objective assessments of AI-enabled systems under operational conditions and against realistic threats. The goal is to ensure that warfighters can trust these systems in operation, and that military decision-makers understand the risks the systems carry. The team is also establishing testing requirements for the safe and ethical development of AI-enabled systems, with the aim of reducing the likelihood of unintended consequences.
The JAIC’s TE&A team collaborates with the organization’s six mission initiatives and the Joint Common Foundation (JCF) to perform algorithm, system, and operational testing on every AI product. The team is also working across the JAIC to establish standards, policies, procedures, and best practices that help ensure the security, effectiveness, and trustworthiness of leading-edge AI solutions. In addition to traditional system testing, the TE&A team aims to help the DoD reduce the likelihood of unintended consequences in AI-enabled systems by:
- Designing mission initiatives to mitigate ethical concerns from the outset;
- Ethically classifying operational regimes that drive design and evaluation considerations; and
- Supporting developmental testing that evaluates ethical system performance.
Recently, the team procured a T&E harness, software that measures model accuracy and other metrics, for the JAIC’s Joint Logistics mission initiative. The software runs on the JCF and uses representative test data to evaluate vendor models and make data-driven recommendations for model fielding. The effort also provides a valuable opportunity to learn about and refine the process for uploading, storing, and updating test software on the JCF. Moving forward, use of the test harness will expand to other mission initiatives and will eventually be made available to the DoD-wide test community.
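To make the idea of a test harness concrete, the sketch below shows, in miniature, what such tooling does: score a model’s predictions against ground-truth labels from representative test data, compute accuracy and per-class precision/recall, and gate a fielding recommendation on a threshold. All names and the 0.90 threshold here are illustrative assumptions for exposition, not details of the JAIC’s actual harness.

```python
def evaluate(predictions, labels, accuracy_threshold=0.90):
    """Score model predictions against ground truth and recommend
    fielding only if overall accuracy meets the (hypothetical) threshold."""
    assert predictions and len(predictions) == len(labels)

    correct = sum(p == y for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)

    # Per-class precision and recall, computed from raw counts.
    per_class = {}
    for cls in set(labels) | set(predictions):
        tp = sum(p == cls and y == cls for p, y in zip(predictions, labels))
        predicted = sum(p == cls for p in predictions)
        actual = sum(y == cls for y in labels)
        per_class[cls] = {
            "precision": tp / predicted if predicted else 0.0,
            "recall": tp / actual if actual else 0.0,
        }

    return {
        "accuracy": accuracy,
        "per_class": per_class,
        "recommend_fielding": accuracy >= accuracy_threshold,
    }

# Example: score a hypothetical vendor model on a tiny labeled test set.
report = evaluate(
    predictions=["truck", "truck", "car", "car", "truck"],
    labels=["truck", "car", "car", "car", "truck"],
)
# report["accuracy"] is 0.8, so fielding is not recommended here.
```

A real harness would add held-out data management, robustness and stress tests, and reporting, but the core loop is the same: fixed test data in, metrics and a defensible recommendation out.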
The TE&A team collaborates closely with DoD, industry, and academic partners to determine the best performance measures for the JAIC’s AI products. Because TE&A plays a vital role within the JAIC, the team also hopes to attract additional testers and operations and systems analysts. “We are working with our Human Resources team to bring more qualified and diversely talented AI and test professionals to our team,” said Dr. Jane Pinelis, the JAIC’s TE&A Chief.
The JAIC’s Test, Evaluation, and Assessment team enthusiastically embraces its unique opportunity to make a difference in the way the Department tests current AI-enabled systems, as well as future innovative systems. Stay tuned to the “AI in Defense Blog” to learn more about the JAIC’s Test, Evaluation, and Assessment team.