In brief
- OpenAI said it will deploy a customized version of ChatGPT on the Pentagon’s GenAI.mil.
- The system is approved for unclassified Defense Department work with data kept separate from OpenAI’s public models.
- Critics warn that risks from human error and overtrust in AI systems remain.
OpenAI said Monday it is deploying a custom version of ChatGPT on GenAI.mil, the AI platform developed by the U.S. Department of Defense.
The move expands the military’s access to powerful generative AI models, even as critics warn that user error remains a key security risk.
ChatGPT joins a growing list of AI models made available to the U.S. military, including Google’s Gemini and xAI’s Grok; xAI was folded into SpaceX earlier this month.
“We believe the people responsible for defending the country should have access to the best tools available, and it is important for the U.S. and other democratic countries to understand how, with the proper safeguards, AI can help protect people, deter adversaries, and prevent future conflict,” OpenAI said in a statement.
OpenAI said the GenAI.mil version of ChatGPT is approved for unclassified Defense Department use and will run inside an authorized government cloud infrastructure.
According to OpenAI, the system includes safeguards designed to protect sensitive data.
Still, J.B. Branch, Big Tech Accountability Advocate at Public Citizen, warned that user overreliance on AI could undermine those protections.
“Research shows that when people use these large language models, they tend to give them the benefit of the doubt,” Branch told Decrypt. “So in high‑impact situations like the military, that makes it even more important to ensure they get things correct.”
The deployment comes as the Pentagon accelerates the adoption of commercial AI across military networks and as AI developers seek profitability.
In January, Defense Secretary Pete Hegseth said the department plans to deploy leading AI models across both unclassified and classified military networks.
While OpenAI said the custom version of ChatGPT is meant only for unclassified data, Branch warned that putting any sensitive information into AI systems leaves it vulnerable to adversaries, adding that users often mistake such tools for secure vaults.
“Classified information is supposed to only have a certain set of eyes on it,” he said. “So even if you have a cut‑off system that’s only allowed within the military, that doesn’t change the fact that classified data is only meant for a limited subset of people.”
OpenAI did not immediately respond to a request for comment from Decrypt.