How Close is the Singularity?

Jun 12, 2025

Stephen DeAngelis

The AI singularity (sometimes referred to as artificial general intelligence (AGI)) is defined as the moment when machines become both smarter than the humans who created them and sentient. The definition of “singularity” has roots in both mathematics and the physical sciences (specifically, cosmology). Both uses are interesting to examine. A mathematical singularity is a point at which a function no longer behaves in a predictable way. In cosmology, it refers to a point, such as the Big Bang or the center of a black hole, where known physical laws break down and from which no useful data can be obtained. The common thread in these definitions of singularity is the impossibility of predicting anything useful about them or their consequences. A singularity changes everything.

So, it was with much interest that I recently read an article by freelance writer Darren Orf, whose headline teased that the singularity might be achieved within the year.[1] That would be big news indeed. Orf writes, “Some researchers who’ve studied the emergence of machine intelligence think that the singularity — the theoretical point where machine surpasses man in intelligence — could occur within decades. On the other end of the prediction spectrum, there’s the CEO of Anthropic, who thinks we’re right on the threshold — give it about 6 more months or so.” The basis for Orf’s article was a new analysis, conducted by AIMultiple, of predictions made by 8,950 scientists, entrepreneurs, and other AI experts. The analysis concluded, “Current surveys of AI researchers are predicting AGI around 2040. However, just a few years before the rapid advancements in large language models (LLMs), scientists were predicting it around 2060. Entrepreneurs are even more bullish, predicting it around ~2030.”

Is the Singularity Inevitable?

Orf writes, “Many experts believe AGI is inevitable.” And, as noted above, his spectrum of inevitability ranges from six months to decades. If, however, computer sentience is an essential part of the singularity, the spectrum should range from now to never. A dozen years ago, Yann LeCun, Vice President and Chief AI Scientist at Meta, stated, “I would be happy in my lifetime to build a machine as intelligent as a rat.”[2] Around the same time, futurists like Vernor Vinge and Ray Kurzweil were predicting that a singularity would occur by mid-century. The late Paul G. Allen, co-founder of Microsoft, and computer scientist Mark Greaves were skeptical of those claims. They wrote, “While we suppose this kind of singularity might one day occur, we don’t think it is near. In fact, we think it will be a very long time coming. … An adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort. But if the singularity is to arrive by 2045, it will take unforeseeable and fundamentally unpredictable breakthroughs, and not because the Law of Accelerating Returns made it the inevitable result of a specific exponential rate of progress.”[3]

Allen and Greaves asserted, “To achieve the singularity, it isn’t enough to just run today’s software faster. We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this. This prior need to understand the basic science of cognition is where the ‘singularity is near’ arguments fail to persuade us.” People predicting the inevitability of the singularity don’t believe understanding human cognition is required. They argue that machine cognition could develop differently from human cognition.

AGI and the Future

Reporter Alex Wilkins writes, “It isn’t always clear what AGI really means. Indeed, that is the subject of heated debate in the AI community, with some insisting it is a useful goal and others that it is a meaningless figment that betrays a misunderstanding of the nature of intelligence — and our prospects for replicating it in machines. ‘It’s not really a scientific concept,’ says Melanie Mitchell at the Santa Fe Institute in New Mexico.”[4] Nevertheless, like many other contested terms, AGI is here to stay, and you will be reading a lot more about it in the future. AI expert Alex Goryachev writes, “I have no doubt that Artificial General Intelligence is coming soon, promising to revolutionize industries from healthcare to science and even our understanding of the universe. I'm genuinely excited about the transformative potential it holds. AGI will redefine industries and accelerate innovation at a pace we've never seen before.”[5] He adds, “In the midst of all this progress, I can't shake the thought: What does this mean for my kids, society, and any person in an AGI-driven world? The excitement is undeniable, but the challenges we face are real.”

Kurzweil believes that AGI will launch revolutions in energy, manufacturing, and medicine.[6] Concerning the energy revolution, Kurzweil writes, “AI … is already driving innovations in both photovoltaics and batteries. This is poised to accelerate dramatically. ... Once vastly smarter AGI finds fully optimal materials, photovoltaic megaprojects will become viable and solar energy can be so abundant as to be almost free.” That abundance, Kurzweil explains, “enables another revolution: in manufacturing. … AI is making big strides in robotics that can greatly reduce labor costs. Robotics will also reduce raw-material extraction costs, and AI is finding ways to replace expensive rare-earth elements with common ones like zirconium, silicon and carbon-based graphene. Together, this means that most kinds of goods will become amazingly cheap and abundant.”

Finally, Kurzweil sees a revolution in medicine. He explains, “Today, scientific progress gives the average American or Briton an extra six to seven weeks of life expectancy each year. When AGI gives us full mastery over cellular biology, these gains will sharply accelerate. ... And thanks to exponential price-performance improvement in computing, AI-driven therapies that are expensive at first will quickly become widely available. This is AI’s most transformative promise: longer, healthier lives unbounded by the scarcity and frailty that have limited humanity since its beginnings.”

Concluding Thoughts

There are certainly reasons to be optimistic about the future of computing. There are also reasons to move ahead with caution. Technology reporter Cade Metz explains, “As these eternally confident voices predict the near future, their speculations are getting ahead of reality. And though their companies are pushing the technology forward at a remarkable rate, an army of more sober voices are quick to dispel any claim that machines will soon match human intellect.”[7] Nick Frosst, a founder of the A.I. start-up Cohere, told Metz, “The technology we’re building today is not sufficient to get there. What we are building now are things that take in words and predict the next most likely word, or they take in pixels and predict the next most likely pixel. That’s very different from what you and I do.” Technology writer Scott Rosenberg agrees. He explains, “Huge hurdles to AGI remain. ... No one knows whether these problems can be fixed by throwing more processing power and better algorithms at them. … Perhaps entirely new architectures and techniques will be required, as some high-profile industry critics believe.”[8]

Pondering when (or if) the singularity might arrive may be an entertaining mental exercise; however, the road to the singularity could turn out to be a dead end. The road is more likely to lead to something AGI-adjacent. As I wrote half a dozen years ago, “Caution is warranted; but, optimism should still rule the day. We simply don't know if we will ever achieve the singularity — and know even less about what will happen if we do.”[9]

Footnotes

[1] Darren Orf, “Humanity May Achieve the Singularity Within the Next 6 Months, Scientists Suggest,” Popular Mechanics, 1 June 2025.

[2] Sean Captain, “A Rat is Smarter Than Google,” NBC News, 4 June 2012.

[3] Paul G. Allen and Mark Greaves, “The Singularity Isn't Near,” MIT Technology Review, 12 October 2011.

[4] Alex Wilkins, “What is artificial general intelligence, and is it a useful concept?” New Scientist, 21 May 2024.

[5] Alex Goryachev, “The Artificial General Intelligence Revolution Is Coming — Here's What Every Leader Needs to Consider,” Entrepreneur, 13 February 2025.

[6] Ray Kurzweil, “Ray Kurzweil on how AI will transform the physical world,” The Economist, 17 June 2024.

[7] Cade Metz, “Why We’re Unlikely to Get Artificial General Intelligence Anytime Soon,” The New York Times, 16 May 2025.

[8] Scott Rosenberg, “AI's promised nirvana is always a few years off,” Axios, 20 February 2025.

[9] Stephen DeAngelis, “Artificial General Intelligence and the Singularity,” Enterra Insights, 12 March 2019.