Fears Grow as Artificial General Intelligence Nears

Mar 24, 2025

steve

"Does artificial intelligence pose a major threat or are the dangers overblown?" That's a question posed by the staff at Supply Chain Today. It's also a question on many peoples' minds. The late theoretical physicist Stephen Hawking mused, "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all." The type of artificial intelligence generating the most fears is artificial general intelligence (AGI). Journalists Michael Calore and Lauren Goode explain, "The idea that machine intelligence will one day take over the world has long been a staple of science fiction. But given the rapid advances in consumer-level artificial intelligence tools, the fear [feels closer now] than it ever has before. The generative AI craze has stirred up excitement and apprehension in equal measure, leaving many people uneasy about where the future of this clearly powerful yet still nascent tech is going."[1]


The Fears


Whether a sentient artificial intelligence system will ever be created remains an open question; however, the creation of something approaching artificial general intelligence seems likely. Technology journalist Kevin Roose writes, "Here are some things I believe about artificial intelligence: I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains — math, coding and medical diagnosis, just to name a few — and that they’re getting better every day. I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like 'a general-purpose A.I. system that can do almost all cognitive tasks a human can do.' I believe that when A.G.I. is announced, there will be debates over definitions and arguments about whether or not it counts as 'real' A.G.I., but that these mostly won’t matter, because the broader point — that we are losing our monopoly on human-level intelligence, and transitioning to a world with very powerful A.I. systems in it — will be true."[2]


When AGI, or something approaching it, is created, Yuval Noah Harari, an Israeli historian, believes it will become a dangerous independent decision-maker, not just a tool. In fact, he believes AI is more dangerous than nuclear weapons. He explains, "A tool is something in your hands. A hammer is a tool. An atom bomb is a tool. You decide to start a war and who to bomb. It doesn’t walk over there and decide to detonate itself. AI can do that."[3] Harari notes that autonomous weapons already exist, and he worries that new weapons could be developed that humans could no longer control. Harari, of course, is not alone. People from all sorts of disciplines are concerned about the unchecked development of AI. Earlier this year, a group of independent experts warned that "advanced artificial intelligence systems have the potential to create extreme new risks, such as fueling widespread job losses, enabling terrorism, or running amok."[4]


In addition to the existential threats to humankind, Tim Bajarin, Chairman at Creative Strategies, Inc., worries about how AGI will have a "profound impact on our humanity." He explains, "The intersection of human and AI interaction is not merely a frontier of technological innovation but a pivotal juncture redefining our relationships and productivity paradigms. As AI integrates deeper into our personal and professional lives, its influence extends beyond efficiency gains, encroaching upon the essence of human experience. This integration challenges traditional notions of creativity, empathy, and interpersonal connections, areas that were once believed to be exclusively human domains. The shift in the human-AI paradigm makes us rely more on these AI tools to do almost everything for us."[5] It would be unwise to simply ignore widespread concerns about AGI — or systems approaching that level of AI. On the other hand, it would be unwise to ignore the potential benefits of such systems as well. As Yoshua Bengio, a prominent AI scientist, observes, "Nobody has a crystal ball. Some scenarios are very beneficial. Some are terrifying. I think it’s really important for policymakers and the public to take stock of that uncertainty."[6]


The Promise


This article started with a question, and Pippa Malmgren, Founder and CEO of the Geopolitica Institute, asks another important one: "How can we make sense of the world if the volume of data exceeds our ability to process it? Humanity is drowning in an ever-widening galaxy of data."[7] A large part of the answer to that question is artificial intelligence. Malmgren observes, however, "AI isn’t just about processing more information faster: it is about reducing uncertainty. Its rise will also profoundly change how humans think about the nature of reality. ... Will the volume of data overload mean that human minds can’t make sense of things that machines can understand perfectly? Will we eventually outsource decision-making to AI powered by neuromorphic chips that form a brain that is vastly better informed and more conscious and conscientious than any one human brain? This means letting go of the details and getting into the flow of this new emergent superhuman consciousness that moves faster than our minds."


The promise of AGI-like systems is that they become partners with humanity, not humanity's overlords. Stanford professor Erik Brynjolfsson insists, “Racing with the machine beats racing against the machine. Technology is not destiny. We shape our destiny.”[8] Eric Boyd, corporate vice president of Microsoft AI Platforms, agrees. He explains, "The potential for this technology to really drive human productivity… to bring economic growth across the globe, is just so powerful, that we'd be foolish to set that aside."[9]


Arvind Narayanan, a professor of computer science at Princeton University, and Sayash Kapoor, a doctoral candidate at the university, believe defenders will work just as hard to prevent AI from causing harm as bad actors will work to use it for nefarious purposes. They write, "If anything, apocalyptic thinking will worsen risk by misguiding policy makers about what interventions are actually needed. Putting AI back in a bottle will do nothing about the real problem. As for the possibility of sheer accident? We will need to improve our early-warning systems and our ability to quickly shut down misbehaving AI. But the engineering community has already done this successfully with many other dangerous systems, and can do it with AI. As elsewhere, the defenders won’t stand still."[10]
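
To make the idea of quickly shutting down a misbehaving system a little more concrete, here is a minimal, hypothetical sketch of the kind of software "circuit breaker" engineers use for other dangerous systems. The class name, thresholds, and metrics below are illustrative assumptions, not a description of any real deployment or any particular vendor's API.

```python
# Hypothetical sketch of an early-warning circuit breaker for an AI service.
# The thresholds and metric names are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class CircuitBreaker:
    """Halts an AI service when monitored metrics drift out of bounds."""
    max_error_rate: float = 0.05        # assumed tolerance for bad outputs
    max_requests_per_min: int = 1000    # assumed ceiling on activity
    tripped: bool = field(default=False, init=False)

    def check(self, error_rate: float, requests_per_min: int) -> bool:
        """Return True if the system may keep running this cycle."""
        if error_rate > self.max_error_rate or requests_per_min > self.max_requests_per_min:
            self.tripped = True
        return not self.tripped


breaker = CircuitBreaker()
if not breaker.check(error_rate=0.12, requests_per_min=400):
    print("Breaker tripped: taking the model offline for review.")
```

Real guardrails monitor far richer signals, but the design choice sketched here is the essential one: the breaker fails closed, so once tripped it stays tripped until a human intervenes.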


Concluding Thoughts


Roose concludes, "I believe that hardened A.I. skeptics — who insist that the progress is all smoke and mirrors, and who dismiss A.G.I. as a delusional fantasy — not only are wrong on the merits, but are giving people a false sense of security." In other words, we shouldn't dismiss the reality that something like AGI will be created — probably sooner rather than later. What we do with that capability is another matter. Malmgren adds, "This is more than a Renaissance. It is something new — and we are present at the creation. It’s a bittersweet moment: but change is happening, like it or not. ... Only by improving the quality of our emerging consciousness, and the conscious qualities of our machines, can we hope to stay afloat in that ever-swelling ocean of knowledge." I agree with Professor Brynjolfsson that we are better off learning to race with the machine than trying to race against it.


Footnotes

[1] Michael Calore and Lauren Goode, "AI Won’t Wipe Out Humanity (Yet)," Wired, 1 June 2023.

[2] Kevin Roose, "Powerful A.I. Is Coming. We’re Not Ready." The New York Times, 14 March 2025.

[3] Staff, "AI is more dangerous than nuclear weapons, warns Yuval Noah Harari," Economic Times CIO.com, 24 March 2025.

[4] Kelvin Chan, "General purpose AI could lead to array of new risks, experts say in report ahead of AI summit," Associated Press, 29 January 2025.

[5] Tim Bajarin, "Will Artificial Intelligence Diminish Our Humanity?" Forbes, 20 May 2024.

[6] Chan, op. cit.

[7] Pippa Malmgren, "Will humans survive the rise of the machines?" UnHerd, 20 May 2024.

[8] Paul Martin, "How to: Survive the Job Automation Apocalypse," LinkedIn, 6 August 2015.

[9] Tom Clarke, "Artificial intelligence 'doesn't have capability to take over', Microsoft boss says," Sky News, 7 July 2023.

[10] Arvind Narayanan and Sayash Kapoor, "Does AI Pose an Existential Risk to Humanity? Two Sides Square Off," The Wall Street Journal, 8 November 2023.