The Next Evolution of Intelligence
Seeing “humans” in machines
At a recent conference, one of the more entertaining speakers remarked, “I always put a ‘please’ on the end of my prompts, because in a few years, when these machines rise up, they’ll see me as one of the good guys.” I recognised the sentiment, not because I see some Terminator future just round the corner, but because when you’re working with these models it’s almost impossible not to anthropomorphise. We have a long-standing tendency to anthropomorphise non-human objects: we see personality in a teddy bear, name a car, treat a soft toy like a friend. The study “Anthropomorphism and object attachment” (Wan et al., 2021) found that attributing human-like properties to non-human objects influences our emotional and cognitive bonds with them. So it’s little wonder that, when we look at an LLM that speaks like a human and reasons like a human, we are predisposed to treat it as a human equivalent.
Just “Stochastic Parrots”
There are, however, plenty of dissenting voices, both in science and philosophy, that vociferously defend our own intellect as unique and unchallenged. The term “stochastic parrot”, coined by Emily M. Bender et al. in their paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, captures the critique. The argument is that LLMs are simply statistical models that repeat our own words and sentences back to us without “understanding” anything, in much the same way that a student with an extraordinary memory but limited mathematical prowess might learn answers verbatim for a maths exam without understanding any of the mathematical concepts behind the questions.
History has some useful parallels when it comes to our own insistence that we are somehow special or unique in the universe. When Charles Darwin proposed evolution by natural selection, many saw it as a psychological affront. We had long seen ourselves as exceptional, rational, moral, made “in the image of God.” Darwin’s work implied that those distinctions were not categorical but evolutionary: differences of degree, not of kind. When their reasoned objections began to crumble, naysayers resorted to authority and metaphysics: the Bible says man was created separately; there must be something immaterial, a soul, that separates us.
There are some striking similarities today. You don’t have to look back too far to find papers or blogs asserting “LLMs can’t sustain a natural conversation” or “LLMs can’t reason”. As models and context windows have grown, both claims have become patently untrue. What we’re left with is the same shift from empirical objections to an appeal to human exceptionalism: “They can’t really understand,” “They’ll never be conscious,” “There’s something ineffable about human thought.”
How LLMs “Understand”
It’s increasingly hard to argue that modern LLMs don’t genuinely “understand” the concepts they discuss. Ask one to “explain thermodynamics and why it matters,” and then try to assert you possess a deeper understanding. It’s not an easy case to make. More importantly, examining how LLMs encode and organise knowledge only reinforces the sense that some authentic form of “understanding” is emerging within these systems.
Large language models don’t store facts like a database; they encode knowledge as geometry in a vast, multi-dimensional space. Each word or token becomes a point defined by how it’s used across billions of sentences, so related concepts like dog, cat, and hamster cluster closely together. Meaning emerges from these distances: relationships like Paris-France or Rome-Italy appear naturally as consistent directions in the model’s internal landscape. As the model’s layers deepen, it learns ever more abstract patterns, from simple word associations to grammar, logic, and conceptual reasoning. Larger context windows let it connect ideas across ever longer stretches of text, a little like working memory. The result is a system that doesn’t just recall information, but navigates a rich map of relationships, a form of understanding that grows in complexity much like our own.
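To make the geometric picture concrete, here is a minimal sketch of the idea in Python. It uses tiny hand-crafted vectors purely for illustration; real models learn embeddings with hundreds or thousands of dimensions from data, and the dimension labels below are an assumption for readability, not how an actual LLM organises its space.

```python
# A toy illustration of "meaning as geometry": hand-crafted vectors standing in
# for learned embeddings. Illustrative dimensions: [city, country, French, Italian, animal, pet]
import numpy as np

embeddings = {
    "paris":   np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0]),
    "france":  np.array([0.0, 1.0, 1.0, 0.0, 0.0, 0.0]),
    "rome":    np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0]),
    "italy":   np.array([0.0, 1.0, 0.0, 1.0, 0.0, 0.0]),
    "dog":     np.array([0.0, 0.0, 0.0, 0.0, 1.0, 0.9]),
    "cat":     np.array([0.0, 0.0, 0.0, 0.0, 0.9, 1.0]),
    "hamster": np.array([0.0, 0.0, 0.0, 0.0, 0.8, 0.7]),
}

def cosine(a, b):
    """Similarity of direction: 1.0 means pointing the same way in meaning-space."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Related concepts cluster together...
print(cosine(embeddings["dog"], embeddings["cat"]))    # high (~0.99)
print(cosine(embeddings["dog"], embeddings["paris"]))  # 0.0 (unrelated)

# ...and relationships appear as directions: Paris - France + Italy lands nearest Rome.
query = embeddings["paris"] - embeddings["france"] + embeddings["italy"]
print(max(embeddings, key=lambda word: cosine(query, embeddings[word])))  # "rome"
```

The point is not the toy numbers but the mechanism: once concepts are positions in a shared space, distance captures relatedness and directions capture relationships, which is one way to see why something like “understanding” can emerge from statistics alone.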
We are still special (for now)
However, despite all these parallels, there remain profound differences between human and machine intelligence. LLMs have no conscious experience, no inner voice, self-awareness, or sense of “being.” They lack long-term memory that evolves with experience, the kind that shapes identity and emotion over time. They don’t possess curiosity, that restless drive to seek novelty or ask “why” unprompted. And they don’t sense the world: there’s no smell of rain, no warmth of sunlight, no body anchoring perception to reality. Their understanding is structural, not experiential. In that sense, the boundary between us and them lies not in the ability to reason or understand, but in the capacity to feel existence, to live within the world, not just model it.
It’s not difficult to imagine how we might extend current models with curiosity, long-term memory, and the ability to sense and experience the world around them. Indeed, many projects are already working towards these goals.
Consciousness, though, is where things become elusive. It’s one thing to explain how intelligence works, but quite another to explain why it feels like something to be intelligent. Philosophers call this the “hard problem of consciousness”, why physical or computational processes should produce subjective experience at all. Closely related is the “problem of other minds”. I know I’m conscious, and I assume you are too, but I can’t prove it. The same will be true for AI. Even if a machine appears self-aware, we may never know whether it truly experiences anything, or whether it’s simply modelling what experience looks like.
A new age of intelligence
We stand at a genuine inflection point. Just as Darwin’s ideas reshaped how we saw our place in nature, the rise of intelligent machines is forcing us to rethink our place in the landscape of thought itself. What’s coming isn’t just another technological revolution, it’s a shift in perspective as profound as any in human history. The systems we’re building won’t simply change how we work, learn, and create. They will change how we see ourselves, what it means to reason, to understand, and perhaps even to be conscious. The boundary between human and machine intelligence is no longer a line, but a gradient. And as we move along it, we may discover that the story of intelligence was never solely ours to begin with.