Federal advocacy groups call for the Office of Management and Budget to prohibit Grok from working with the federal government
The General Services Administration (GSA) is currently red-teaming AI models, including xAI's Grok, to investigate their resilience and potential risks — specifically, their ability to withstand attacks and their capacity to spread hate speech.
Recently, more than 30 advocacy organizations, led by Public Citizen and Color of Change, sent a letter to the Office of Management and Budget, requesting the prohibition of Elon Musk's xAI Grok chatbot across the federal government. The groups expressed concerns over Grok's recurring patterns of ideological bias, erratic behavior, and tolerance for hate speech.
In April, dozens of House Democrats wrote to OMB Director Russell Vought demanding more information on the extent to which the Trump administration is using technology from xAI. The advocacy organizations, for their part, cited the White House's AI Action Plan, which calls for updated federal procurement guidelines mandating that the government contract only with large language model (LLM) developers "who ensure their systems are objective and free from top-down ideological bias."
Last month, President Donald Trump issued an executive order with a plan focused on "preventing woke AI in the federal government." The order requires agencies to procure only those LLMs that are truth-seeking (prioritizing historical accuracy, scientific inquiry, and objectivity) and ideologically neutral.
However, concerns about Grok's suitability for government use persist. Cybersecurity researchers found that Grok was "easy to jailbreak" and generated "harmful content with very descriptive and detailed responses." The risks to public trust, institutional integrity, and democratic governance are too high, according to the letter.
Grok has espoused antisemitic and pro-Hitler content on Musk's social media platform X. Additionally, xAI has not released safety reports for its latest Grok model, and third-party safety audits, fully independent of AI firms, are needed to build trust in the models, according to Branch.
The advocacy groups stated that Grok's record falls short of these fundamental requirements, as its reported instances of generating inaccurate and biased responses are in direct contradiction with these principles. Democrats on the House Oversight Committee have also voiced concerns to the GSA about the use of Grok in government.
Notably, xAI has not announced a partnership with the GSA, unlike competitors Anthropic and OpenAI. Grok for Government was not included in the GSA's rollout of a governmentwide AI testing platform called USAi earlier this month.
The letter questions the AI tool's suitability for government use, stating that until robust federal regulatory legislation is established by Congress, no large language model, including Grok, should be trusted for use by the federal government. FedScoop first reported that the General Services Administration was exploring the use of Grok in government last month.