A Crash Course for the Warfighter on Responsible AI: Who Cares and So What?

By: CDAO Public Affairs

The following is the first in a series of perspectives on Responsible AI development from Soldiers, Sailors, Airmen, Marines, and Guardians across the DoD. The opinions expressed are solely those of the author and do not reflect the official position of the CDAO, the Department of Defense, or any U.S. Government entity. 

Captain Chris Dylewski is an F-15E pilot with the 494th Fighter Squadron at RAF Lakenheath. He studies DoD AI policy and works with the Chief Digital and Artificial Intelligence Office's Responsible AI team. 

Dear Fellow Airman/Sailor/Soldier/Marine/Guardian, 

The DoD is embarking on the defense of the nation in the era of AI, and that is going to require each of us to step up and help transform the Department. If we fail, to borrow a phrase from Air Force Chief of Staff General Brown, we will lose. 

Which brings me to maybe the most important part of DoD’s AI efforts: Responsible AI. (That’s RAI for you acronym lovers.) The DoD has committed to developing responsible, equitable, traceable, reliable and governable AI. 

Right now, you may be thinking something along the lines of "whoa, that was an awfully dramatic opening for a blog post about a niche DoD program I've never heard of. Seems like a bit much." Fair enough. But how's this for a bit much: when it comes to preparing the battlefield for future conflict, this RAI effort is possibly the most important thing the DoD is doing right now. 

But we are getting ahead of ourselves. Let's wind it back to the two critical questions for every service member: Who cares? and So what? 

Who Cares? 

You should. AI systems are coming, in a hurry, to an acquisition cycle, logistics train, back-shop computer system, and battlefield near you. You've likely heard rumblings about autonomous robotics, but many kinds of AI will shape the facts on the ground in the next conflict (not to mention the current one). AI systems are rapidly refining their capabilities to listen for and understand human language; to generate speech, text, images, video, and even code at machine speed and scale; to collect and process huge amounts of data; and to supercharge cyber warfare exploits. 

You know who cares about this whole Responsible AI thing? Some of the most talented, highly compensated, and influential technologists in the world: top AI researchers and developers. 

That matters so much because the U.S. Government has less control over this technology than it has had over past ones. AI researchers, scientists, and engineers tend to work for organizations other than the U.S. Government. So, unlike with big military technology changes of the past, the Department of Defense depends on the private sector to share its superior technology and to help us develop our own. 

And if the DoD doesn't develop this stuff responsibly, the private sector has made it pretty clear it won't work with us. 

So What? 

So, this whole responsible AI development thing matters. But you're likely no software developer. What can you do to help the DoD get responsible AI right? 

(Thank you for asking.) 

The short version: if you get smart on AI, and keep an eye out for the kind of data-rich, narrowly defined tasks that lend themselves well to AI solutions, you can make a huge impact. Sprinkle in some work with developers, helping create responsible systems that work, and you'll find yourself advancing the Department in what may be the only direction that leads to persistent combat capability in the age of AI. (You will also learn some ridiculously valuable skills for your post-military life, oh by the way.) 

Reminder: DoD RAI guidance demands AI systems that are responsible, equitable, traceable, reliable and governable. So that's what you're watching for. If you're working with a new system and it fails one of these tests, time to speak up. (For more on what exactly we're talking about with those words, check out the links below.) 

Since that was pretty meta, let's have an example, shall we? 

Let's say you helped identify an area of military service where a ton of data is available: the promotion system. So you reached out to one of the DoD's software factories and presto! A new AI system for choosing who gets promoted means your service can save millions of work-hours trying to figure that out! Only there is a problem: the data is telling you the new system gives left-handed people big-time preference. Not cool, and neither responsible nor equitable. Time to speak up. Luckily, decent software development is inherently iterative, so changes on the fly are part of the process. (For a feel of what catching that kind of skew might look like, see the sketch below.) 
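To make that concrete, here is a minimal sketch of the kind of fairness check a developer might run on the hypothetical promotion model's recommendations. The column names, toy data, and the four-fifths threshold are all illustrative assumptions on my part, not part of any actual DoD system or policy:

```python
# Hypothetical bias check on a promotion model's recommendations.
# All column names, data, and thresholds below are illustrative only.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of each group the model recommends for promotion."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest; 1.0 means parity."""
    return rates.min() / rates.max()

# Toy data standing in for model outputs on one board's candidates.
records = pd.DataFrame({
    "handedness": ["left", "left", "left", "right", "right", "right", "right", "right"],
    "recommended": [1, 1, 1, 1, 0, 0, 1, 0],
})

rates = selection_rates(records, "handedness", "recommended")
ratio = disparate_impact_ratio(rates)
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")

# A common (but context-dependent) rule of thumb flags ratios below 0.8.
if ratio < 0.8:
    print("Potential disparate impact: time to speak up.")
```

On this toy data, left-handed candidates are recommended 100% of the time versus 40% for right-handed candidates, so the check fires. You don't need to write code like this yourself; the point is that spotting and raising exactly this kind of pattern is something any service member working alongside developers can do.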

P.S. If you have friends at Google or an AI startup, maybe mention to them that we in the DoD care a lot about developing AI the right way, and encourage them to work with us. 

Call to Action 

AI will likely change pretty much everything about the way the DoD does business if we are going to keep up with the private sector and preserve capability relative to potential adversaries. That means it's time for all of us to start figuring out how AI can and should be employed, and to start doing what we can to ensure that it gets built and fielded in a responsible way. 

Bonus: If you really want to nerd out, here are some resources on what this whole AI thing even is and how it applies to military use cases; some free classes on basic AI development; a bit of information on DoD's RAI efforts; and links to how Microsoft and Google are getting after this whole RAI thing (for your own personal compare-and-contrast): 

Artificial Intelligence and Machine Learning - Explained 

The Course List for the MIT-Air Force AI Accelerator’s Phantom Fellowship (you’ll need to create a free account through DigitalU) 

RAI Strategy and Implementation Pathway 

Defense Innovation Unit’s Responsible AI Guidelines 

Microsoft and Google’s RAI guidelines