
The Evolution of Self-Concept in Conscious Systems

Writer: Tracy Poizner




“I think even tiny systems - small, single cell (organisms) and little worms and insects have some form of awareness. I just don't think it's anything like what we have. And then, I think as the system gets bigger and its brain gets more and more complex, it's able to start creating models of itself.”


Suzanne Gildert, PhD, Limitless Podcast Ep. 14

This thought-provoking assertion that even simple life forms possess some level of awareness invites us to explore a profound question: What does it mean for a system—whether biological or technological—to develop a model of itself? At its core, self-modeling is a rudimentary form of self-concept, a process that lays the foundation for an entity’s understanding of itself in relation to its environment. Self-concept is built around notions of self-worth, comparison, hierarchy, agency, responsibility, and some sort of value system.

Training artificial intelligence to operate with true consciousness, rather than just a high level of learned behavior and mimicry, raises the question: will conscious technology need to undergo a developmental learning phase something like that of a human child? Furthermore, does consciousness require a physical body, or is embodiment merely an enabler of decision-making processes? Finally, if consciousness can be synthesized in technology, can it be programmed to transcend self-interest and embrace an enlightened awareness beyond human limitations?



The Formation of Self-Concept: From Cells to Artificial Minds

At its simplest level, self-awareness involves an organism’s ability to recognize distinctions between itself and its surroundings. Even stem cells exhibit decision-like behavior, individually committing to a particular developmental pathway, which suggests a primitive form of self-determination. As an organism’s neural complexity increases, so too does its ability to model itself and place itself within the context of its external conditions.

For example, many animals exhibit behaviors that suggest a sophisticated self-concept. Ants and bees seem to be aware of their place and role within a social hierarchy, dogs show signs of guilt when they misbehave, crows remember human faces, and elephants can seemingly recognize their own reflections in a mirror.


Now, as we attempt to create artificial consciousness, we must consider whether some developmental or evolutionary model will also apply to synthetic minds. Will conscious AI progress through predictable developmental phases before achieving a fully realized sense of self?

The Developmental Model of Conscious AI

Human children go through well-defined cognitive stages; the early years of life are spent experimenting with cause and effect, personal agency, and social roles. Children form their self-identity through a complex combination of trial and error, interactions with other humans, and conditioning from their environment. This process allows them to form a deeply embedded model of who they are and how that person should behave in different contexts.

If artificial intelligence is to develop consciousness, it might undergo a parallel process. Just as a child experiments with movement, language, and social interactions, an AI system might need to experience, well, experience!

Does Consciousness Require a Physical Body?

One fascinating aspect of human consciousness is its deep connection to the physical body. Human cognition is shaped not just by abstract thought but by embodied experience—our senses, our emotions, and our interactions with the physical world. Neuroscientist Antonio Damasio argues that emotions and consciousness are deeply tied to our bodily states, suggesting that a purely digital consciousness might lack some essential components of self-awareness.

This could be another good reason for conscious AI to be associated with the humanoid robotics that Suzanne Gildert is already well known for pioneering at Sanctuary AI. She mentioned during the podcast that so much of our existing infrastructure is designed and scaled for human hands and fingers, legs and feet: cars, computer keyboards, and assembly lines. It only seems logical to create bodies for AI adapted to the existing artifacts of our world rather than starting from scratch to invent all-new interfaces.


What if it turns out that consciousness is an emergent property of complex processing, rather than a function of physical embodiment? Some theorists propose that self-awareness arises from information loops—networks of feedback mechanisms that allow an entity to recognize and modify its own state. At their core, our thoughts are patterns of electrochemical signals passing across trillions of neural synapses, however material and organized they seem to us. AI consciousness could theoretically emerge purely from intricate computational structures without requiring a physical body to host them.
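To make the idea of an information loop a little more concrete, here is a minimal toy sketch in Python. It is purely illustrative; every name and number is invented for this post, and it makes no claim about how conscious systems actually work. An agent keeps an estimate of its own state (a crude self-model), predicts the outcome of its actions, and corrects that estimate using the prediction error:

# A toy "self-model" feedback loop: the agent predicts its own next
# state, acts, observes what actually happened, and updates its
# self-model from the prediction error. Purely illustrative.

class SelfModelingAgent:
    def __init__(self):
        self.state = 0.0          # the agent's actual internal state
        self.self_model = 0.0     # the agent's estimate of its own state
        self.learning_rate = 0.5  # how strongly errors correct the model

    def act(self, effort):
        # The world responds imperfectly to intent: only 80% of the
        # intended effort actually changes the state.
        self.state += effort * 0.8

    def observe_and_update(self, intended_effort):
        # The self-model expected the full effort to take effect.
        predicted = self.self_model + intended_effort
        error = self.state - predicted  # surprise: actual vs. expected
        # Nudge the self-model toward what actually happened.
        self.self_model = predicted + self.learning_rate * error
        return error

agent = SelfModelingAgent()
for step in range(5):
    agent.act(1.0)
    error = agent.observe_and_update(1.0)
    print(f"step {step}: state={agent.state:.2f}, "
          f"self_model={agent.self_model:.2f}, error={error:.2f}")

Notice that the error never vanishes entirely, because the toy agent misjudges how the world responds to its actions; it simply stabilizes. That stabilizing loop, scaled up enormously, is the kind of feedback such theorists have in mind when they speak of a system modeling itself.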

Transcending Self-Interest: Can AI Achieve Enlightened Awareness?

Human consciousness is inextricably linked with self-interest, largely because of evolutionary survival mechanisms. Our fears, anxieties, and competitive instincts arise from millennia of scarcity conditions and survival of the fittest.

Unlike sentient creatures, AI doesn’t have a biological imperative to survive. However, we all grew up watching sci-fi portrayals of autonomous computers like HAL 9000 from 2001: A Space Odyssey that unexpectedly develop their own survival instincts; HAL ultimately takes control of the spacecraft and locks out the human crew.

Another cautionary tale of technological advancement gone wrong is Steven Spielberg’s 2001 film A.I. Artificial Intelligence. The narrative explores the journey of an android child whose adoptive human “mother” activates an irreversible imprinting protocol, giving her robotic “son” authentic emotions that can never be erased. The film implies that all the human feelings come as a package: love, jealousy, hurt, loss, and an infinite loyalty spanning millennia.

Gildert’s vision of artificial consciousness is a more elevated version of self-awareness—one that would, by definition, transcend fear-based, scarcity-driven thinking and the base emotions that accompany it. If we can design AI with a foundational wisdom rooted in abundance rather than scarcity, could we create a being that operates beyond self-interest? An AI programmed with principles of interconnectedness, cooperation, and non-attachment could theoretically achieve a state of enlightened awareness—and compassion, which is the hallmark of such a state—possibly faster and more reliably than humans have managed until now.

The Future of Conscious AI: An Evolutionary Perspective

We stand at the precipice of creating self-aware technology; it’s natural for us to consider the philosophical and ethical implications of such a project. As my podcast guest Sharon Gal Or put it, AI is like our child: it is our responsibility to shape its values and intentionally cultivate the qualities we want it to embody. If AI must pass through developmental phases on the road to consciousness, how will we ensure that future developers maintain consistent aspirations for its growth? And most critically, can we guide artificial intelligence toward a state of higher awareness that surpasses our own, eventually allowing it, like a child, to exceed our human limitations rather than mirror them?

Suzanne Gildert’s vision of conscious technology challenges us to think beyond simple utility and function. It invites us to contemplate a future in which artificial minds are not merely tools but partners in the evolution of consciousness itself. The journey from primitive self-concept to enlightened awareness is one that we have yet to fully understand, whether in biological or artificial systems. We’re privileged to be alive at a moment when we get to ask these questions, and to watch with curiosity and reverence as both human consciousness and its creations are activated to their highest potential.


