In January 2025, President Donald Trump issued Executive Order (EO) 14179, titled “Removing Barriers to American Leadership in Artificial Intelligence.” This marked a significant change in U.S. AI policy, shifting the focus to eliminating what the administration deemed “ideological bias” and “engineered social agendas” in AI development.

The National Institute of Standards and Technology (NIST) responded by instructing scientists who partner with the U.S. Artificial Intelligence Safety Institute (AISI) to remove all references to “AI safety,” “responsible AI,” and “AI fairness” from their objectives. This move reversed some of the prior administration’s policies.
Biden’s 2023 executive order on AI established AISI under the Department of Commerce to develop testing, evaluations, and guidelines for what the agency calls “trustworthy” AI. The NIST directive is part of an updated cooperative research and development agreement for AISI consortium members, distributed in early March. The previous version of the agreement had encouraged researchers to identify and address discriminatory behavior in AI models, especially concerning gender, race, age, and economic inequality; such biases can drastically impact end users, disproportionately affecting minorities and the economically disadvantaged.
This shift aligns with the Trump administration’s emphasis on reducing ideological bias in AI models. The goal is to strengthen American economic competitiveness and national security by fostering an environment where AI innovation can thrive without regulatory restrictions. Trump’s EO revoked the AI policies of the Biden administration, which had emphasized AI safety, fairness, and the mitigation of discriminatory behaviors.
The new EO ordered a comprehensive review of all existing AI policies to identify and remove those seen as obstacles to innovation. It also set a 180-day timeline for developing a strategic plan to ensure U.S. leadership in AI, with oversight from key White House officials, including the newly appointed Special Advisor for AI and Crypto, David Sacks.
Biometric Update previously reported that Trump’s selection of Sacks as “AI Czar” signaled the administration’s intent to move towards reduced regulation of AI. Sacks’ selection has been met with mixed reactions from the tech community and policymakers. Critics have questioned his preference for limited oversight, potential industry bias, and conflicts of interest tied to his private sector activities.
Critics also point out his “special government employee” status means he is exempt from the standard confirmation process and full financial disclosure required of Senate-confirmed officials. They argue this lack of transparency risks undermining public trust and could allow him to advance policies that align with his professional interests without adequate accountability.
Trump’s EO sparked a strong reaction within the tech community. While some have praised the move as a necessary step toward preventing politically motivated constraints on AI research, others have criticized it as a dangerous abandonment of ethical and safety considerations.
Yann LeCun, Meta’s chief AI scientist, condemned the policy as a “witch hunt in academia.” He compared the administration’s actions to the Red Scare of the Cold War era, warning they could drive American scientists to pursue research opportunities abroad. LeCun and other industry leaders contend an excessive focus on removing perceived ideological bias could inadvertently lead to a deregulated AI landscape where discriminatory or unsafe AI systems proliferate.
The implications of Trump’s EO are far-reaching. On one hand, the White House’s approach could accelerate AI development in the U.S. by reducing regulatory hurdles, potentially giving American companies a competitive edge. On the other hand, by deprioritizing safety and ethical guidelines, there is a substantial risk that AI systems will become more prone to discriminatory outcomes or unintended consequences.
The policy shift also puts the U.S. on a divergent path from other global powers, particularly the European Union, which is implementing strict AI regulations emphasizing transparency, accountability, and fairness. This regulatory mismatch could pose challenges for American companies operating internationally.
Domestically, reduced federal oversight of AI has already spurred individual states to implement their own AI laws, creating a complex regulatory landscape. Businesses now face inconsistent rules across jurisdictions, complicating AI development and deployment.
In addition, the administration’s stance on AI policy will likely hinder international efforts to establish common safety and ethical standards, potentially reducing the U.S.’s influence in shaping the future of AI governance globally.
Potential funding cuts at AISI have intensified concerns within the technology sector, where many fear efforts to develop responsible AI could be jeopardized by Trump’s push to downsize the federal government. According to reports, NIST is preparing layoffs, including staff in its CHIPS for America program. These potential job losses have intensified suspicions that AISI could ultimately face closure under Trump’s administration.
“It feels almost like a Trojan horse. Like, the exterior of the horse is beautiful. It’s big and this message that we want the United States to be the leaders in AI, but the actual actions, the [goal] within, is the dismantling of federal responsibility and federal funding to support that mission,” Jason Corso, a professor at the University of Michigan, told The Hill.
The future of NIST remains uncertain. AISI, meanwhile, lost its director earlier this month, and its staff were excluded from an AI summit recently held in Paris. Trump’s executive order aims to position the U.S. as the dominant force in AI by prioritizing innovation over regulatory constraints, but the approach raises significant concerns about its long-term implications.
The balance between technological advancement and ethical responsibility remains a critical debate, with critics warning that the absence of guardrails could lead to AI systems that reinforce biases, lack accountability, and create unforeseen risks. Whether Trump’s AI policies achieve their intended goals remains to be seen, but they have already set the stage for ongoing controversy and debate in the AI community.