Aug 7, 2025
Stephen DeAngelis
Back in 2023, a group of U.S. Senators met with CEOs from some of the world’s largest artificial intelligence (AI) companies and other civic leaders, including Elon Musk, Sundar Pichai, Mark Zuckerberg, Bill Gates, and Sam Altman. During that meeting, Altman told the senators, “We need government to lead, and we look forward to partnering with you.”[1] After the meeting, Musk insisted that AI “is potentially harmful to all humans everywhere,” and argued that the federal government needs a department of AI to oversee its development. The concerns voiced about AI in 2023 remain valid; so do the power struggles over its future.
Shortly after the meeting between senators and AI moguls, Bruce Schneier, a security technologist, and Nathan Sanders, a data scientist affiliated with the Berkman Klein Center at Harvard University, discussed the nature of this power struggle.[2] They explained:
“There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in A.I. technology have also brought forth a unifying realization of the risks — and the steps we need to take to mitigate them. The reality, unfortunately, is quite different. Beneath almost all of the testimony, the manifestoes, the blog posts, and the public declarations issued about A.I. are battles among deeply divided factions. Some are concerned about far-future risks that sound like science fiction. Some are genuinely alarmed by the practical problems that chatbots and deepfake video generators are creating right now. Some are motivated by potential business revenue, others by national security concerns.”
Schneier and Sanders warned, “If lawmakers and the public fail to recognize the subtext of their arguments, they risk missing the real consequences of our possible regulatory and cultural paths forward.” Nevertheless, over the past few years, a growing chorus of computer scientists has warned that guardrails should be installed to prevent AI from doing irreparable harm to humanity.
AI Guardrails
In late 2023, “Representatives from 28 countries and regions, including the USA, European Union, and China, came together to sign the Bletchley Declaration, which emphasizes the urgent need to collaboratively manage the potential opportunities and risks associated with frontier AI.”[3] According to journalist Dev Kundaliya, “The Bletchley Declaration acknowledges that substantial risks could arise from the misuse or unintentional control issues associated with the technology, particularly in the fields of cybersecurity, biotechnology and disinformation. The signatories expressed their concern over the potential for ‘serious, even catastrophic, harm, whether deliberate or unintentional, stemming from the most significant capabilities of these AI models.’ The declaration also recognizes the broader risks beyond frontier AI, such as bias and privacy concerns. It underscores the need for international cooperation to address these risks effectively.”[4] Kundaliya notes that “frontier AI” refers to “highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models” (aka artificial general intelligence (AGI) systems).
International regulatory cooperation would certainly be the best way to ensure legitimate AGI initiatives comply with global norms. International regulations would also help global AI companies simplify their compliance efforts. Unfortunately, we currently live in a world more willing to compete than cooperate. Even if cooperation were the norm, technology has a way of running ahead of regulation. Journalist Isabelle Bousquette explains, “Companies are pressing ahead with building and deploying artificial intelligence applications, even as the regulatory landscape remains in flux.”[5] She also notes, “The fact that AI regulations could look different nation to nation, or even state to state, creates additional complexities for how companies operate their AI tools.”
To avoid complexities created by “state to state” regulations, the U.S. House’s version of the “One Big Beautiful Bill” sought to preclude states from enforcing state-enacted AI regulations. The proposed provision stated, “No State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”[6]
AI Growth
The Trump administration generally views the world as a grand competition, which means there must be both winners and losers. This perspective was clearly displayed in the administration’s “action plan” for AI released in July. The introduction to the action plan states, “The United States is in a race to achieve global dominance in artificial intelligence. Whoever has the largest AI ecosystem will set global AI standards and reap broad economic and military benefits. Just like we won the space race, it is imperative that the United States and its allies win this race.” The TechCrunch editorial team observed, “The plan trades guardrails for growth. It downplays AI risks and environmental safeguards, prioritizing deregulation and infrastructure, even on federal lands. Critics say this could harm vulnerable communities and concentrate pollution.”[7]
On the other hand, Hodan Omaar, a senior policy manager at the Center for Data Innovation, insists, “The Trump administration’s AI Action Plan is a decisive set of solid policies that puts the United States front and center on the global stage, stepping up to sustain U.S. technological innovation, respond to rising geopolitical competition, and translate AI progress into tangible economic and societal gains at home and abroad.”[8] The Editorial Board of the Wall Street Journal sees the administration’s AI action plan in much the same light. They write, “Credit the Trump team for trying to keep America’s latest golden goose alive. … The goal is to liberate innovators from burdensome regulation, bulldoze impediments to new data centers, and ‘hyper-scale’ private investment, to borrow an industry buzzword. Competition and investment are already enormous.”[9]
Freelance writer Jackie Snow observes, “The plan's more than 90 federal policy actions fall into three distinct categories of feasibility: immediate wins like expanding AI contracts within the Pentagon and loosening export controls that leverage existing government authority; challenging infrastructure goals requiring massive private investment in data centers and grid modernization that could take years; and potentially unworkable provisions like rooting out ‘ideological bias’ in AI systems, where officials have yet to define clear metrics for measuring the very ‘wokeness’ they aim to eliminate.”[10] On the whole, however, reaction to the Trump administration’s action plan seems positive. The Editorial Board at the Washington Post concludes, “Overall, the president’s plan represents an excellent start. But it is also only a start. There is still a very long race to run, and until it is won, the whole team needs to lock in.”[11]
Concluding Thoughts
Growth and guardrails needn’t be mutually exclusive goals. Innovation and caution can live together. Getting everyone to agree on compromise policies, however, won’t be easy. Schneier and Sanders observe, for example, that some guardrails could do more harm than good. They explain, “Reasonable sounding on their face, these ideas can become dangerous if stretched to their logical extremes. A dogmatic long-termer would willingly sacrifice the well-being of people today to stave off a prophesied extinction event like A.I. enslavement.” On the other hand, they note, “The very companies driving the A.I. revolution have, at times, been eliminating safeguards.”
I agree with Omaar, who stated, “The need for the United States to assert its policy leadership in AI is sorely needed. The global technological landscape has shifted, and China is quickly closing the innovation gap, backed by a coordinated national strategy and heavy investment in infrastructure, talent, and deployment.” I also agree with Schneier and Sanders who conclude, “Ultimately, we need to make sure the network of laws and regulations that govern our collective behavior is knit more strongly, with fewer gaps and greater ability to hold the powerful accountable, particularly in those areas most sensitive to our democracy and environment.”
Finally, the Wall Street Journal’s Editorial Board observes, “The Administration’s AI ambitions run headlong into its restrictionist immigration policies. The U.S. will need many more foreign workers to train AI models and build data centers. Restricting human capital won’t help the U.S. dominate AI.” America needs great minds, scientific research, and a young and vigorous workforce to remain an economic leader and innovation powerhouse.
Footnotes
[1] Maria Curi and Ashley Gold, “Musk, other tech giants agree legislation needed to regulate AI,” Axios, 13 September 2023.
[2] Bruce Schneier and Nathan Sanders, “The A.I. Wars Have Three Factions, and They All Crave Power,” The New York Times, 28 September 2023.
[3] Dev Kundaliya, “Bletchley Declaration: Nations sign agreement for safe and responsible AI advancement,” Computing, 2 November 2023.
[4] Ibid.
[5] Isabelle Bousquette, “AI Is Moving Faster Than Attempts to Regulate It. Here’s How Companies Are Coping,” The Wall Street Journal, 27 March 2024.
[6] Joy Dasgupta, “AI’s Moving at Warp Speed. Who’s Got the Brakes?” SupplyChainBrain, 26 June 2025.
[7] Rob Spiegel, “Trump Unveils Sweeping AI Deregulation Plan to Counter China,” DesignNews, 25 July 2025.
[8] Hodan Omaar, “The AI Action Plan Puts the US Back at the Helm of Global AI Leadership,” Center for Data Innovation, 25 July 2025.
[9] Editorial Board, “Liberation Day for American AI,” The Wall Street Journal, 23 July 2025.
[10] Jackie Snow, “Trump's AI agenda includes the doable — and the impossible,” Quartz, 24 July 2025.
[11] Editorial Board, “Trump is off to a good start with an AI action plan,” The Washington Post, 27 July 2025.