U.S. President Donald Trump holds an executive order related to AI after signing it during the “Winning the AI Race” Summit in Washington D.C., U.S., July 23, 2025.
Kent Nishimura | Reuters
U.S. President Donald Trump has vowed to keep “woke AI” models out of Washington and to turn the country into an “AI export powerhouse,” signing three artificial intelligence-focused executive orders on Wednesday.
The phasing out of diversity, equity and inclusion (DEI) initiatives — an umbrella term encompassing various practices, policies, and strategies aimed at fostering a more inclusive and equitable culture — has been a major focus of the second Trump administration. Now, the White House is bringing the battle to AI.
The “Preventing Woke AI in the Federal Government” order states that the federal government “has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas.”
The executive order identifies DEI as one of the “most pervasive and destructive” of these ideologies to be kept out of AI models used by the government.
“LLMs shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI,” the order said, adding that developers should not intentionally encode partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by users.
As acknowledged by the order, the use of AI is increasingly prevalent across Americans’ daily lives and is expected to play a critical role in the way they learn and consume information — making “reliable outputs” necessary.
In the eyes of the Trump administration, DEI in AI can lead to discriminatory outcomes; distort and manipulate AI model outputs in regard to race and sex; and incorporate concepts like critical race theory, transgenderism, unconscious bias, intersectionality and systemic racism.
“DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI,” the anti-woke order reads.

Without giving specifics, the order refers to past examples of this, including a major AI model that changed the race or sex of historical figures such as the pope and Founding Fathers when prompted for images.
In response to backlash last year, Google pulled its Gemini AI image generation feature, acknowledging “inaccuracies” in historical pictures. Months later, the company rolled out an improved version.
Instead of “woke AI,” the government should procure “truth-seeking” AI models that “prioritize historical accuracy, scientific inquiry, and objectivity, and shall acknowledge uncertainty where reliable information is incomplete or contradictory,” the order stated.
However, it adds that the federal government “should be hesitant” to regulate the functionality of AI models in the private marketplace.
In other AI developments on Wednesday, the Trump administration signed an order to spur innovation in the technology by removing what it called “onerous Federal regulations that hinder AI development and deployment.”
Another order aims to establish and implement an “American AI Exports Program” to support the development and deployment of the U.S. AI technology stack abroad.
The moves are part of the administration’s “Winning the AI Race: America’s AI Action Plan,” which it says identifies 90 federal policy actions across three pillars: the acceleration of innovation, building of AI infrastructure, and leadership in international diplomacy and security.
LLM controversy escalates
The AI executive orders come after AI companies Anthropic, Google, OpenAI, and xAI received government contracts with the Defense Department, awarding them up to $200 million to help accelerate the agency’s adoption of advanced AI capabilities to “address critical national security challenges.”
The wording of the “anti-woke” order appears to align with much of the messaging of xAI, which is run by former Trump advisor and megadonor Elon Musk. The company’s AI chatbot Grok has been advertised as an “anti-woke” and “maximally truth-seeking” artificial intelligence.
However, a slew of headlines surrounding Grok in recent weeks highlighted the controversies that can arise regarding how AI models are trained and their perceived political biases.
When Grok 4 was launched earlier this month, CNBC discovered that it would seek out Musk’s views when answering certain controversial questions. xAI later said in a post that this was a mistake and that it had fixed the issue.
In the same post, the company addressed Grok 4 having temporarily called itself “MechaHitler” in response to some users. That came just days after the company apologized for xAI’s free chatbot Grok 3, which is integrated into the social media platform X, praising Adolf Hitler and making other antisemitic and controversial comments.
The Grok ordeal occurred after Musk acknowledged a user’s complaint that the chatbot was being influenced by leftist bias. In early July, he announced that his team had made changes and “improved” the chatbot.
Grok’s Hitler episode triggered widespread condemnation and increased scrutiny from the European Union.
Countries around the globe are still forming their regulatory guidelines for generative AI chatbots. In the U.S., there remains no comprehensive federal legislation or regulation governing the development of AI or specifically prohibiting or restricting its use, international law firm White & Case said in a report this week.