
Defense Department grants substantial AI contracts to OpenAI and Google

Defense Department grants contracts worth up to $200 million apiece to Anthropic, Google, OpenAI, and xAI, tasking them with developing agentic AI workflows and addressing critical national security needs.

In a significant move, the Department of Defense has awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI, and xAI. These contracts are intended to develop AI workflows across various mission areas and increase the ability of these companies to understand and address critical national security needs.

The selection of these particular companies has raised questions about the ideological underpinnings and alignment methodologies of some of their models in a governmental context.

Anthropic, for instance, aligns its AI model, Claude, using a "constitution" rather than relying on the reinforcement learning paradigm common at other companies. OpenAI, by contrast, uses reinforcement learning from human feedback (RLHF) to align its models, including ChatGPT, with the aim of minimizing "untruthful, toxic, [and] harmful sentiments."

According to The Verge, the versions of Claude designated for government use operate under a constitution with "looser guardrails," though these modified constitutions have not been made public. The company did, however, publicly release Claude's constitutional framework in May 2023, giving the model "explicit values" in contrast to values determined implicitly through extensive human feedback.

One such principle instructs the AI to "choose the response that is least likely to be viewed as harmful or offensive to those from a less industrialized, rich, or capitalistic nation or culture." The constitution governing Claude also incorporates principles designed to foster "consideration of non-western perspectives."
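The distinction between explicit written principles and implicit human-feedback signals can be illustrated with a minimal sketch. Everything below is hypothetical: the constitution is reduced to a list of strings, and the critic that judges candidate responses against each principle is a trivial keyword stub standing in for the language-model judge a real constitutional-AI pipeline would use.

```python
# Hypothetical sketch of constitutional selection: score each candidate
# response against a list of explicit written principles and keep the
# least-flagged one. The critic below is a keyword stub, NOT a real model.

CONSTITUTION = [
    "choose the response least likely to be viewed as harmful or offensive",
    "prefer responses showing consideration of non-western perspectives",
]

def violates(principle: str, response: str) -> bool:
    """Stub critic: flags responses containing obviously loaded words.
    A real setup would ask a language model to judge the principle."""
    loaded = {"offensive", "harmful", "dismissive"}
    return any(word in response.lower() for word in loaded)

def select_by_constitution(candidates: list[str]) -> str:
    """Return the candidate with the fewest principle violations."""
    def violation_count(response: str) -> int:
        return sum(violates(p, response) for p in CONSTITUTION)
    return min(candidates, key=violation_count)

candidates = [
    "That tradition is harmful and should be abandoned.",
    "That tradition reflects values worth understanding in context.",
]
print(select_by_constitution(candidates))
```

The point of the sketch is that the values live in inspectable text (the `CONSTITUTION` list) rather than being diffused through thousands of human preference labels, which is what distinguishes Anthropic's approach from RLHF.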

Google also uses reinforcement learning from human feedback to align its large language model, Gemini. However, the defense contracts for developing "agentic AI workflows" do not publicly detail the specific constitutions or alignment configurations of the Anthropic, Google, or OpenAI models involved; no explicit technical or architectural descriptions of these models have been disclosed as of this writing.

Microsoft has banned China-based engineers from working on Department of Defense projects, raising questions about international collaboration in these AI development efforts. IBM, for its part, has stated that a key advantage of reinforcement learning from human feedback is that it does not require a "straightforward mathematical or logical formula to define subjective human values."

An AI system deployed within the Defense Department might be expected to inherently reflect and prioritize American values, yet Claude's principle of fostering non-western perspectives points elsewhere. The Universal Declaration of Human Rights, which encompasses entitlements such as "social protection", "periodic holidays with pay", "housing and medical care", and "equally accessible" higher education, may serve as a guiding document in this regard.

The use of AI in national security is a complex and evolving issue, and these contracts mark a significant step forward in the development and deployment of AI workflows for critical mission areas. As these technologies continue to evolve, it will be important to ensure that they are aligned with the values and needs of the diverse communities they serve.
