
In a world increasingly driven by artificial intelligence, we find ourselves captivated by machines that emulate what we call ‘intelligence’. But before we marvel at neural nets and language models, we must ask: whose intelligence are we referring to when we use the term? The rise of AI, paradoxically, urges us to reflect not just on technology, but on what intelligence itself truly is.
Historically, René Descartes and Ibn Rushd (Averroes) offer contrasting frameworks for understanding intelligence and consciousness, differences that remain highly relevant in today’s artificial intelligence debates. Descartes (1596–1650), often called the father of modern Western philosophy, grounded consciousness in subjective certainty: ‘Cogito, ergo sum’ (I think, therefore I am). For Descartes, intelligence is fundamentally internal, individual and rooted in the mind’s ability to reason independently of the body: a dualism that separates mental from material substance.
In contrast, Ibn Rushd (1126–1198), the great Islamic philosopher and commentator on Aristotle, proposed a radically different model. In his theory of the Unity of the Intellect, he argued that true intelligence is not housed in individual minds but rather in a universal agent intellect shared among all rational beings (Ibn Rushd, Long Commentary on the De Anima). Consciousness, for Ibn Rushd, emerges through participation in this shared rational structure, rather than isolated cognition.
These differences speak directly to the modern challenge of defining intelligence. Descartes’ legacy underpins many AI models that treat the mind as an abstract, computational system — isolated and programmable. Ibn Rushd’s vision, by contrast, aligns more closely with emergent, relational and distributed models of cognition, such as Clark and Chalmers’ (1998) ‘Extended Mind’ hypothesis, which posits that cognition can stretch beyond the brain into tools, environments and networks.
As we grapple with the question of artificial consciousness, these two doctrines remind us of a deeper issue: is intelligence a self-contained algorithm, or a shared process embedded in the world? In AI’s era, that distinction could shape how (or whether) machines can ever truly think.
Going further east, the Buddhist philosopher Nagarjuna challenges the very basis of individual identity through the concept of Śūnyatā (emptiness), developed most fully in his Mūlamadhyamakakārikā. Nagarjuna argues that things exist only in relation to other things, not in themselves; a kindred argument appears in The Questions of King Milinda, where the monk Nagasena dismantles the notion of a unitary self. Intelligence, then, may not be an intrinsic property of individual beings, but something that emerges through relations and dependencies.
This brings us to collective intelligence, an idea far older than machine learning. Ant colonies exhibit astonishing collective behaviour, often described as swarm intelligence. Research shows that ants solve complex problems — such as pathfinding and resource allocation — through decentralised coordination. No individual ant ‘knows’ the solution, but together they act with uncanny precision.
Even more fascinating is the intelligence of fungi, particularly mycorrhizal networks, which interlink trees underground to exchange nutrients, warn of dangers, and coordinate behaviour. These fungal systems, popularised as the ‘Wood Wide Web’ through the research of forest ecologist Suzanne Simard and the writing of forester Peter Wohlleben, show that intelligence can exist without neurons, brains or central control.
Social organisms consistently outperform solitary individuals. The complexity sciences show that systems composed of simple agents following local rules can produce outcomes far more advanced than any agent alone. This emergent intelligence is key to understanding how intelligence might be distributed, rather than localised.
This vision is echoed in Clark and Chalmers’ ‘Extended Mind’ hypothesis, which argues that the mind does not reside solely in the skull, but extends into tools, environments and other people. Our smartphones, notebooks and even relationships are part of our cognitive process. Reframing this alongside Nagarjuna’s emptiness and Ibn Rushd’s unity of intellect reveals a deeper pattern: intelligence is not in the thing, it is in the relation.
Forests, with their interspecies fungal networks, are prime examples. Here, cognition exists not in a single organism, but in the entanglement of many. The fungal ‘mind’ spreads across roots and soil, shaping ecological decisions like resource allocation and species survival. Monica Gagliano’s experimental work on plant learning further disrupts the idea that brains are required for intelligence: plants ‘remember,’ adapt, and negotiate. Peter Wohlleben describes how trees ‘train’ their young and respond to distress signals, suggesting social behaviour far more complex than previously assumed.
What all this reveals is that intelligence is more contested, more tangled, than our Cartesian legacy ever imagined. Before we build artificial consciousness, we must ask: what is consciousness for? If organisms without brains, like plants and fungi, exhibit intelligent behaviour, then perhaps consciousness is not a computational achievement, but an ecological necessity.
Understanding intelligence, then, is not a preliminary step — it is the foundational one. Before adapting to artificial intelligence, we must reckon with what intelligence means across life forms, histories and systems. We must ask not how to build it, but why it arises, and what it serves. Only then can we responsibly approach the task of artificial consciousness — not as engineers of minds, but as students of life’s tangled existence.
Muttaki Bin Kamal is a doctoral candidate in anthropology at the University of Texas at San Antonio.