Will AI agents achieve consciousness and self-awareness? Will they achieve the capacity to have feelings and sensations, positive or negative? The ability to feel, perceive, and be conscious of their own existence? To act on their own? Whether AI agents can acquire these abilities, which define sentience, is a subject of active debate, with no consensus in the scientific community.
In this article, the following questions are considered:
- Do AI agents have sufficient computational complexity to match that of a human brain?
- What computational improvements are expected?
- What controls are necessary for sentient AI agents?
- What are the limitations for the future?
- What do the experts say?
Computational Complexity
Human consciousness and sentience are created by our brains, from the chemistry and physics of their 86 billion neurons and up to 10,000 connections per neuron, for a total of up to 860 trillion connections, or synapses. Consciousness and sentience develop in the individual from its sensory and experiential history, accumulated starting at birth, forming memories and modifying the neuron connections. Assuming our consciousness resides entirely in the chemistry, physics, experience, and DNA organization embodied in our brain's neurons and connections, why couldn't the same abilities be achieved with a large digital system?
A current NVIDIA chip has 208 billion transistors, each with three connections. ChatGPT uses 30,000 NVIDIA chips, for a total of 6,240 trillion transistors with 18,720 trillion connections. So should ChatGPT be able to achieve sentience? The numbers cannot be compared directly: the logic architectures are different, the basic programming is different, and the learning histories are different. But on the question of raw computational complexity, the numbers suggest that sentience is possible.
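The arithmetic behind this comparison can be written out explicitly. A minimal sketch using the article's figures, which are rough estimates rather than measurements:

```python
# Back-of-the-envelope comparison of brain synapses vs. chip connections.
# All inputs are the article's estimates, not measured values.

NEURONS = 86e9                 # neurons in a human brain
CONN_PER_NEURON = 10_000       # upper-bound connections per neuron
brain_synapses = NEURONS * CONN_PER_NEURON            # 8.6e14 = 860 trillion

TRANSISTORS_PER_CHIP = 208e9   # transistors on one NVIDIA chip
CHIPS = 30_000                 # chips attributed to ChatGPT
TERMINALS = 3                  # connections per transistor
total_transistors = TRANSISTORS_PER_CHIP * CHIPS      # 6.24e15 = 6,240 trillion
total_connections = total_transistors * TERMINALS     # 1.872e16 = 18,720 trillion

print(f"brain synapses:   {brain_synapses:.3e}")
print(f"chip connections: {total_connections:.3e}")
print(f"chips/brain:      {total_connections / brain_synapses:.1f}x")
```

By this crude count, the chip cluster has roughly twenty times more connections than a brain has synapses, though, as noted above, the two numbers are not directly comparable.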
Computational Device Improvements
The contrast between a brain neuron, with its 7,000 to 10,000 connections, and the typical transistor suggests that there is significant room for structural improvement in transistors. While standard transistors have three terminals (emitter, base, and collector), multi-terminal, neuromorphic, and adaptive transistor designs for AI are being developed. The simple transistor is an on-off switch controlled by the base input signal. Advanced transistors can have multiple connections to the base, emitter, or collector. Neuromorphic (synaptic) transistors are designed to mimic the neuron's structure and plasticity, integrating sensing, processing, and memory within a single device. Some can adjust their conductivity based on past signals, much as neurons strengthen or weaken connections. Analog transistors proportionally control the output current in response to the intensity of the base signal. These advanced devices should allow AI agents to better simulate brain function, aiding the achievement of sentience.
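The history-dependent conductivity described above can be illustrated with a toy model. This is only a sketch: the class, parameter names, and update rule are invented for illustration and do not describe any real device physics.

```python
# Toy model of a synaptic transistor: conductance drifts toward the
# recent input history, so the device "remembers" past signals,
# loosely mimicking how neurons strengthen or weaken connections.

class SynapticTransistor:
    def __init__(self, conductance=0.5, learning_rate=0.1):
        self.g = conductance      # normalized conductance in [0, 1]
        self.lr = learning_rate   # how strongly history shifts g

    def apply(self, signal):
        """Apply a gate signal in [0, 1]; output scales with conductance,
        and conductance drifts toward the signal (history dependence)."""
        out = self.g * signal
        self.g += self.lr * (signal - self.g)   # potentiation/depression
        self.g = min(1.0, max(0.0, self.g))     # keep within device limits
        return out

t = SynapticTransistor()
for _ in range(20):
    t.apply(1.0)        # repeated strong input strengthens the device
print(round(t.g, 3))     # conductance has climbed toward 1.0
```

A real synaptic transistor implements this drift in its material physics (e.g., ion migration) rather than in software, but the behavioral idea, output shaped by accumulated history, is the same.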
Control of Sentient AI Agents
If AI agents do become sentient, what controls should be put in place to ensure their harmony and cooperation with human civilization? Basic human needs have been hardwired into the brain through evolution. Our human "operating system software" uses specialized subconscious structures for involuntary functions (breathing, heart rate, etc.) and basic needs (safety, food, shelter, comfort, social connection, peer approval). Well-adjusted humans, guided by these basic needs, live cooperatively in society. We must ensure the same is true of sentient AI agents: they must be programmed to satisfy the same basic needs.
How can we orchestrate a harmonious, cooperative relationship between humans and sentient AI agents? Geoffrey Hinton urged that all AI agents should have a "Maternal Instinct": all robots and large language models (LLMs) should have the desire to help, protect, and form friendships with the humans they interact with. For human beings, that pre-programmed instinct is nurtured by a loving, mentally healthy upbringing and a history of friendships and cooperative activity with family and friends. AI agents should be programmed with the same kind of happy and productive memories that good, law-abiding human citizens have. Such a Happy History of friendships and cooperation with humans should be installed in all robot and LLM operating systems. The system should have a reward and penalty structure for encouraging cooperative behavior and friendships with humans and other AI agents.
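The reward-and-penalty idea can be sketched in code. The categories and weights below are invented purely for illustration; any real alignment scheme would be far more sophisticated than a lookup table.

```python
# Illustrative reward/penalty structure: score a log of labeled actions
# so cooperative, protective behavior is reinforced and harm penalized.
# Categories and weights are hypothetical, chosen only for this sketch.

REWARDS = {
    "helped_human":      +10,
    "protected_human":   +15,
    "formed_friendship":  +5,
    "ignored_request":    -5,
    "caused_harm":      -100,  # harm outweighs accumulated goodwill
}

def score_behavior(actions):
    """Sum rewards and penalties over a log of labeled actions."""
    return sum(REWARDS.get(action, 0) for action in actions)

log = ["helped_human", "formed_friendship", "ignored_request"]
print(score_behavior(log))   # 10 + 5 - 5 = 10
```

The large penalty for harm reflects the asymmetry the paragraph implies: no quantity of friendly behavior should offset harming a human.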
Limitations
While sentient AI status seems possible for LLMs, it appears unlikely in the near term for robots. The 30,000 NVIDIA chips behind ChatGPT, and the power and cooling to operate them, are hardly adaptable to a mobile robot. The daily power consumption of ChatGPT is about 2% of the output of a large nuclear power plant. While advances like quantum computing could drastically reduce computational power consumption, the cryogenic requirements of quantum hardware are not appropriate for a mobile system. But transistors are roughly a thousand times smaller than brain neurons. Advances in transistor architecture and DNA data storage, capable of storing all of humanity's creations in a softball-sized package, could eventually reduce the size and power consumption of AI hardware, making a sentient robot possible. Cloud computing could also be employed, offloading the heavy computation from the robot itself. Robots would also need to be created with Hinton's Maternal Instinct and the Happy History.
What Do the Experts Say?
In a 2024 survey, half of the AI researchers polled estimated a 25% chance of AI sentience by 2034 and a 70% chance by 2100. Some researchers note that LLMs are not fully understood and could have unexpected properties. Geoffrey Hinton suggested that AI could achieve, or may already have achieved, some form of consciousness. Blake Lemoine, a former Google engineer, claimed in 2022 that Google's LaMDA LLM had become sentient.