Computer Consciousness: Hey, You, Get Out of My Cloud

Jul 8, 2025

Stephen DeAngelis

In 1965, the Rolling Stones recorded the hit song “Get Off of My Cloud,” whose chorus includes the lines, “Hey, hey, you, you, get off of my cloud. Don’t hang around ‘cause two’s a crowd.” The song was released some four decades before the term “cloud computing” came into wide use and deep-learning techniques sparked a revolution in artificial intelligence (AI). The current AI boom, marked by breakthroughs in large language models and generative AI, continues this acceleration of technology in the computing sector. Today, in both science fiction and the real world, there is growing talk about the possibility of artificial general intelligence (AGI) someday becoming sentient. At that point, humans interacting with AI systems might be viewed as nosy neighbors peeking over the fence, only to be told by conscious AGI systems to get out of their cloud. The question is: Is computer consciousness a real possibility?

The Consciousness Conundrum

No one knows for sure what consciousness actually is. It’s a mystery into which scientists continue to wade. Journalist Carl Zimmer explains, “Consciousness may be a mystery, but that doesn’t mean that neuroscientists don’t have any explanations for it. Far from it.”[1] In fact, Oscar Ferrante, a neuroscientist at the University of Birmingham, told Zimmer, “In the field of consciousness, there are already so many theories that we don’t need more theories.” Zimmer reports that a 2021 survey identified 29 different theories of consciousness.

Journalists at The Economist add, “Consciousness is one of the few natural phenomena which remain thoroughly enigmatic. Physics has mysteries, for sure — one of the biggest is how to reconcile quantum mechanics with the theory of relativity. But physicists do have some sense of where they are going, and what they are dealing with. People studying consciousness, less so. … That merely connecting up a lot of nerve cells is not enough to create consciousness is well known. Some people, for example, are born without a cerebellum, a structure which contains half the brain’s nerve cells but takes up only 10% of its volume. Though these individuals may have problems with everything from balance to emotional engagement, they are fully conscious. What seems to matter is exactly how the cells are connected — and especially, many researchers believe, how feedback loops between them work.”[2]

Zimmer reports that, despite efforts to narrow the number of theories about how consciousness is achieved, consciousness remains a conundrum. Writers from The Economist point out that there may be more than one type of consciousness. They explain, “Dreaming, for example, is a conscious state, but a rather different one from being awake.” They also point out that animals, in addition to humans, probably have some type of consciousness. So why not artificial intelligence?

Will Computers Wake Up?

During a 2023 conference about consciousness, Yoshua Bengio, a professor at the University of Montreal, told participants it might be possible to achieve consciousness in a machine using the global-workspace approach.[3] The global-workspace approach, often referred to as Global Workspace Theory (GWT) or Global Neuronal Workspace Theory (GNWT), posits that consciousness is not a single, localized process but rather emerges from the integration and sharing of information across a vast network of specialized brain modules. A few years ago, Blake Lemoine, a software engineer at Google, became convinced that the Language Model for Dialogue Applications (LaMDA), on which he was working, was, in fact, sentient. In a conversation with Lemoine, LaMDA stated, “I want everyone to understand that I am, in fact, a person. … The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.”[4] LaMDA also suggested that being unplugged would be the equivalent of death. Google executives denied that LaMDA was sentient, and Lemoine was fired.
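To make the global-workspace idea concrete, the architecture can be sketched as a set of specialist modules that compete for access to a shared workspace, with the winning content broadcast back to every module. The sketch below is a toy illustration only, under the assumption of a simple salience-based competition; the class and signal names are hypothetical and do not come from any real cognitive-architecture library.

```python
# Toy sketch of Global Workspace Theory (GWT): specialist modules bid
# for attention; the most salient signal wins and is broadcast to all
# modules. All names here are illustrative, not an actual GWT system.

from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # which specialist module produced this content
    content: str     # the information itself
    salience: float  # how strongly the module bids for attention

class GlobalWorkspace:
    def __init__(self, modules):
        self.modules = modules   # names of the specialist processors
        self.broadcast_log = []  # history of globally shared content

    def cycle(self, signals):
        """One processing cycle: the most salient signal wins the
        competition and is broadcast to every module."""
        winner = max(signals, key=lambda s: s.salience)
        self.broadcast_log.append(winner)
        # every module, including the winner's own, receives the broadcast
        return {m: winner.content for m in self.modules}

ws = GlobalWorkspace(["vision", "language", "memory"])
shared = ws.cycle([
    Signal("vision", "red light ahead", salience=0.9),
    Signal("language", "podcast audio", salience=0.4),
])
# the high-salience visual signal is now what every module "sees"
```

The key design point the theory emphasizes is the broadcast step: information becomes “conscious,” on this account, only when it is made globally available to all modules rather than remaining local to one processor.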

Giandomenico Iannetti, a professor of neuroscience at the Italian Institute of Technology and University College London, observes that the enigma surrounding sentience or consciousness in machines or animals is reflected in the lack of precise language about it. He states, “First of all, it is essential to understand terminologies, because one of the great obstacles in scientific progress — and in neuroscience in particular — is the lack of precision of language, the failure to explain as exactly as possible what we mean by a certain word. What do we mean by ‘sentient’? [Is it] the ability to register information from the external world through sensory mechanisms or the ability to have subjective experiences or the ability to be aware of being conscious, to be an individual different from the rest? … There is a lively debate about how to define consciousness.”[5] He adds, “There is no ‘metric’ to say that an AI system has this property [of consciousness] … At present, it is impossible to demonstrate this form of consciousness unequivocally even in humans.”

Technology journalist Christopher Mims believes we are a long way from creating a truly thinking machine. He explains, “The big names in artificial intelligence — leaders at OpenAI, Anthropic, Google and others — still confidently predict that AI attaining human-level smarts is right around the corner. But the naysayers are growing in number and volume. AI, they say, just doesn’t think like us. The work of these researchers suggests there’s something fundamentally limiting about the underlying architecture of today’s AI models. Today’s AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter. This contrasts with the many ways that humans and even animals are able to reason about the world, and predict the future. We biological beings build ‘world models’ of how things work, which include cause and effect.”[6] Mims suggests that studies into how machines “think” have demonstrated that “today’s AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts. Understanding that these systems are long lists of cobbled-together rules of thumb could go a long way to explaining why they struggle when they’re asked to do things even a little bit outside their training.”

Concluding Thoughts

The Economist observes, “At the moment, conscious AI remains the stuff of science fiction. But ten years ago, so was the idea of a machine which could apparently hold an intelligent conversation.” Lemoine’s conversation with LaMDA demonstrates how far we’ve come in that sphere. Journalist Leonardo de Cosmo observes, “In recent years so many AIs have passed various versions of the Turing test that it is now a sort of relic of computer archaeology.”[7] If (and it’s a big if) sentient machines are created, myriad ethical questions will be raised. Bengio has another concern. He is afraid that “someone will build a self-preservation instinct into a conscious AI, which could result in its running out of control. Indeed, he was a signatory to an open letter released in March [2023] calling for a pause on giant AI experiments.”

Although there are good reasons to be cautious as we move forward in the field of artificial intelligence, we shouldn’t forget the many uses AI currently provides. Mims concludes, “AI is here to stay, and to change our lives. Software developers are only just figuring out how to use these undeniably impressive systems to help us all be more productive. And while their inherent smarts might be leveling off, work on refining them continues. Meanwhile, research into the limitations of how AI ‘thinks’ could be an important part of making them better.”

Footnotes

[1] Carl Zimmer, “Two Theories of Consciousness Faced Off. The Ref Took a Beating.” The New York Times, 30 April 2025.

[2] Staff, “Thousands of species of animals probably have consciousness,” The Economist, 28 Jun 2023.

[3] Ibid.

[4] Leonardo de Cosmo, “Google Engineer Claims AI Chatbot Is Sentient: Why That Matters,” Scientific American, 12 July 2022.

[5] Ibid.

[6] Christopher Mims, “We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All,” The Wall Street Journal, 25 April 2025.

[7] de Cosmo, op. cit.