Pentagon has some weak new guidelines for ethical AI

You gotta love when the DoD is vague.

On Monday, the Department of Defense dropped “a series of ethical principles for the use of Artificial Intelligence.” The lukewarm mixtape samples recommendations from the Defense Innovation Board. The principles will apply to both combat and non-combat AI applications, but there’s no word on exactly how they’ll be enforced for the weaponized portion of the Pentagon’s AI operations.

It’s all in the language — The wording of the guidelines doesn’t offer much in the way of specificity. The five principles promise the department’s use of AI will be responsible, equitable, traceable, reliable, and governable. The descriptions of each read like memos from companies criticized over a lack of diversity or transparency. Hot phrases like “take deliberate steps” mix with a long-winded explanation of off-switches and the notion that bias, like consequences, is always unintended.

The five principles are listed in full below:

  • Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable. The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable. The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable. The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  • Governable. The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

“I worry that the principles are a bit of an ethics-washing project,” Lucy Suchman, an anthropologist specializing in the role of AI in warfare, told the Associated Press. “The word ‘appropriate’ is open to a lot of interpretations.”

Air Force Lt. Gen. Jack Shanahan told the AP that the vagueness is meant to protect the military from being bound to outdated guidelines in a quickly evolving sector of technology. Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center, also said the guidelines position the U.S. on the moral high ground compared to Russia’s and China’s AI initiatives. Because that’s where the bar is for technological ethics right now.