Microsoft Pitched the US Military on Using Azure OpenAI’s DALL-E for Battle

Apr 09, 2024

Microsoft Azure’s version of OpenAI’s image generator, DALL-E, was pitched as a battlefield tool for the U.S. Department of Defense (DoD), as first reported by The Intercept on Wednesday. According to the report, Microsoft delivered its sales pitch for Azure OpenAI’s tools in October 2023, likely hoping to capitalize on the U.S. military’s growing interest in using generative AI for warfare.

“Using the DALL-E models to create images to train battle management systems,” reads a line from Microsoft’s pitch to the DoD, according to a presentation obtained by The Intercept. The sentence on DALL-E’s potential military application appears in a slide deck titled “Generative AI with DoD Data,” alongside Microsoft’s branding.

Azure offers many of OpenAI’s tools, including DALL-E, thanks to Microsoft’s $10 billion partnership with the AI company. When it comes to military use, Microsoft Azure has the bonus of not being held to OpenAI’s pesky, overarching mission: “to ensure that artificial general intelligence benefits all of humanity.” OpenAI’s policies prohibit using its services to “harm others” or for spyware. Microsoft, however, offers OpenAI’s tools under its own corporate umbrella, and the company has partnered with the armed forces for decades, according to a Microsoft spokesperson.

“This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI,” a Microsoft spokesperson said in an email about the presentation.

As recently as last year, OpenAI (as distinct from Azure OpenAI) prohibited using its tools for “military and warfare” and “weapons development,” as documented on the Internet Archive. In January 2024, however, OpenAI quietly removed that language from its Universal Policies, a change first spotted by The Intercept. Just days later, OpenAI’s VP of global affairs, Anna Makanju, told Bloomberg that the company was beginning to work with the Pentagon. OpenAI noted at the time that several national security use cases align with its mission.

“OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” said an OpenAI spokesperson in an email. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”

Governments around the world seem to be embracing AI as the future of warfare. We recently learned that Israel has been using an AI system called Lavender to generate a “kill list” of 37,000 people in Gaza, as first reported by +972 Magazine. And since July of last year, American military officials have been experimenting with large language models for military tasks, according to Bloomberg.

The tech industry has undoubtedly taken notice of this massive financial opportunity. The former CEO of Google, Eric Schmidt, is building AI kamikaze drones under a company named White Stork. Schmidt has bridged the tech industry and the Pentagon for years, and he is now leading the effort to deploy AI on the front lines.

Tech has long been bolstered by the Pentagon, dating back to the first semiconductor chips in the 1950s, so it’s no surprise that AI is being embraced in the same way. While OpenAI’s goals sound lofty and peaceful, its Microsoft partnership allows the company to sidestep them and sell its world-leading AI to the American military.