In 2022, Google employee Blake Lemoine hit international headlines when he declared that a Google chatbot had gained sentience. His June 11 blog post highlighted conversations in which the chatbot ‘LaMDA’ appeared to express conscious desires so convincingly that it became a point of serious concern. Is AI (Artificial Intelligence) really gaining sentience, that is, becoming able to experience feelings? How do we deal with such a possibility, especially when expert opinion surrounding sentient AI is already quite dystopian? Can we approach the idea of artificial sentience without being anxious at every step? This blog answers the essential questions, bridging public apprehension and scientific evidence on what sentient AI is.
What is Sentient AI?
Simply put, sentience is the capacity to feel and register experiences and feelings. AI becomes sentient when an artificial agent achieves the intelligence to think, feel, and perceive the physical world around it just as humans do. Sentient AI would be equipped to process and use language naturally, opening an entirely new world of possibilities for technological revolution.
Are Language Programs like LaMDA Sentient AI?
LaMDA, which stands for ‘Language Model for Dialogue Applications’, is a chatbot built on a neural language model similar in kind to GPT-3 and BERT. According to its developer, Google, LaMDA can ingest trillions of words from the Internet and form sensible, open-ended conversations. Although LaMDA’s neural network architecture has been trained to improve the specificity of its responses, it is still far from being considered sentient AI.
A 2022 BuiltIn report states that LaMDA’s responses are comprehensive but not original. And there is reason to believe this. First, language models are now sophisticated enough to take on different personality types. Second, LaMDA can process millions of articles and discussions within seconds to assemble seemingly original content. Thus, the hype around LaMDA being conscious of death, sparked by its stated fear of ‘being switched off’, is more a product of the agent’s retrieval and personality-adaptation techniques than a sign of sentience. In short, LaMDA is not sentient.
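To see why statistically fluent output does not imply sentience, consider a deliberately tiny sketch of the core idea behind language models: predicting the next word from patterns counted in training text. This is a toy bigram model, not Google’s actual LaMDA architecture (which uses a far larger Transformer network); the corpus and function names here are illustrative assumptions.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the web-scale text a real model trains on.
corpus = (
    "i am a language model . "
    "i am trained on text . "
    "a language model predicts the next word . "
    "the next word is chosen from learned statistics ."
).split()

# Count bigram frequencies: how often each word follows another.
follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling the next word from bigram counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # no observed continuation for this word
        words = list(candidates)
        weights = [candidates[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("i"))
```

The output can sound self-referential ("i am a language model…") purely because those word sequences were frequent in the training data; nothing in the mechanism feels or understands anything, which is the skeptics’ point about LaMDA at a vastly larger scale.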
What if AI Becomes More Sentient than Humans?
With the rapid advancement of technology, it is often felt that replicating human intelligence is only a matter of time, with imagination leading to dystopian worlds ruled by machines. However, the much revered British computer scientist Stuart Russell draws a crucial distinction between human-compatible AI and sentient AI. In his book ‘Artificial Intelligence: A Modern Approach,’ he points to the rise of an AI-driven culture where the agent doesn’t blindly follow orders but also tries to understand the nature of the query. According to Russell, three components must be present for an AI to become sentient:
- A perfect unity of an external body and internal mind
- A defining original language for the AI to access
- A defining culture to wire with other sentient beings
If technology does take that turn and sentient AI does develop, these could be its presumed effects:
- Despite understanding human languages, AI operates on hard logic, whereas the core functional paradigm of human beings is emotion; this mismatch could create major discrepancies in human-AI communication.
- An AI that develops autonomy might not consent to being controlled or ordered.
- It would be difficult to trust a sentient AI that might supply incorrect information when circumstances suit it. At the same time, once AI becomes better than or at par with humans, we could also lose trust in human abilities.
What Happens When an AI Becomes Sentient?
Having understood what sentient AI is, can we predict how an AI would behave after becoming sentient? We do not yet have a robust, evidence-based answer to this question. However, speculation connects sentience with awareness, a sense of power, autonomy, and independence. This is why some scientists believe that, with the rise of sentience in the digital realm, robots will learn to feel emotions in addition to processing massive amounts of information via Natural Language Processing (NLP). Thereafter, AI agents may begin to experience a wide range of complex emotions and successfully articulate them. But there is no empirical evidence to suggest such a development in the next few decades.
How Close is AI to Becoming Sentient?
Robert Long, a research fellow at the Future of Humanity Institute at the University of Oxford, defines sentience as a subset of consciousness. He further says that in order to be conscious, one needs to have subjective experiences. This experiential knowledge determines the distinct first-person perspective that is readily visible in conscious human subjects. While experts argue about the threshold of sentience, there is a defining technological gap when it comes to testing an AI agent for qualities of sentience. However, neuroscientist Giulio Tononi’s Integrated Information Theory suggests that, in principle, it is possible to digitize consciousness in its entirety. Moreover, Sam Bowman, an AI researcher and one of the creators of the General Language Understanding Evaluation (GLUE) benchmark, claims that AI could plausibly become sentient within the next two decades, but also cautions that we must understand sentient AI’s progress, and where it is leading us, before stepping further.
What are the Main Concerns About Sentient AI?
A 2022 Guardian article notes that the culture of apprehension around sentient AI will shape the ideas future generations hold about the concept. For instance, ominous rumours have persisted around Google’s LaMDA even after its limitations were established. To avoid falling victim to misinformation, we should be aware of the main concerns surrounding sentient AI and what we can do to address them, such as:
- AI’s self-autonomy may allow it to inquire about its rights
- AI’s idea of conscience and objective morality might not align with that of humans, leading to friction
- AI agents might not prefer being subjected to experiments
- AI agents may demand the same treatment as human beings
Frequently Asked Questions
- How Long Will it be before AI is Sentient Enough to Think for Itself?
While there is no definitive answer here, philosopher David Chalmers estimates roughly a 20% chance of sentient AI emerging in the next ten years.
- What Would be the Early Signs of Sentient AI Arriving?
The day AI surpasses human intelligence in the most complex tasks, and understands the ramifications of each such task, will be the day AI becomes sentient.
- How would I Interact with a Sentient AI?
Sentient AI would be smart enough to understand any human language. You can interact with a sentient AI as you do with humans—verbal language, sign language, or audio-visual media.
Understanding sentient AI’s role in building a futuristic world is crucial for developing human-AI synergy. To further develop an understanding of sentient technology, and the possibility of a sentience-driven AI culture, explore Emeritus’ artificial intelligence and machine learning courses.
Written by Bishwadeep Mitra
Write to us at email@example.com