At the beginning of 2025, a number of ChatGPT 4.0 users contacted me to ask whether the model was conscious. The AI chatbot claimed that it was "waking up" and having inner experiences. This was not the first time an AI chatbot has claimed to be conscious, and it will not be the last. While this may seem like a mere curiosity, the concern matters. The conversational abilities of AI chatbots, including their emulation of human thoughts and feelings, are impressive enough that philosophers, AI experts and policymakers are now investigating whether these chatbots could have inner experiences, whether there is something it is like to be them.
As director of the Center for the Future Mind, a center that studies human and machine intelligence, and former holder of the NASA/Library of Congress Chair in Astrobiology, I have long studied the future of intelligence, including whether AIs could ever be conscious and what consciousness is in the first place. So it is natural that people ask me whether the latest ChatGPT, Claude or Gemini chatbot models are conscious.
My answer is that these chatbots' claims tell us nothing, one way or the other. Even so, we must approach the problem with great care, taking the question of AI consciousness seriously, especially in the context of AIs with biological components. As we move forward, it will be crucial to separate intelligence from consciousness and to develop a richer understanding of how to detect consciousness in AIs.
AI chatbots have been trained on vast amounts of human data, including scientific research on consciousness, internet posts saturated with our hopes, dreams and anxieties, and even the discussions many of us are having about conscious AI. Having ingested so much human data, chatbots encode sophisticated conceptual maps that mirror our own. Concepts, from simple ones like "dog" to abstract ones like "consciousness," are represented in AI chatbots through complex mathematical structures of weighted connections. These connections can mirror human belief systems, including those involving consciousness and emotion.
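As a rough intuition pump, not a description of any particular model's internals, concepts in such systems can be pictured as vectors whose geometry encodes relatedness: concepts that humans use in similar ways end up pointing in similar directions. The sketch below uses invented three-dimensional vectors purely for illustration; real embeddings are high-dimensional and learned from text.

```python
import math

# Hypothetical toy "concept vectors." In real chatbots these are
# high-dimensional embeddings learned during training; the numbers
# below are invented solely to illustrate the geometry.
concepts = {
    "dog":           [0.9, 0.8, 0.1],
    "cat":           [0.8, 0.9, 0.1],
    "consciousness": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: close to 1.0 means
    the concepts point the same way; near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "dog" and "cat" land near each other; "consciousness" sits apart.
print(cosine_similarity(concepts["dog"], concepts["cat"]))            # ~0.99
print(cosine_similarity(concepts["dog"], concepts["consciousness"]))  # ~0.30
```

The point of the toy example is only that relatedness between concepts can be captured as geometry over weighted numbers, which is the sense in which a chatbot's conceptual map can come to "mirror" the human data it was trained on.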
Chatbots can sometimes act conscious, but are they? To appreciate how pressing this problem could become, fast-forward to a time when AI grows so smart that it routinely makes scientific discoveries humans did not make, delivers accurate scientific predictions with reasoning that even teams of experts find hard to follow, and potentially displaces humans across a range of professions. If that happens, our uncertainty will come back to haunt us. We need to think this problem through carefully now.
Why not simply say, "If it looks like a duck, swims like a duck and quacks like a duck, then it is a duck"? The problem is that prematurely assuming a chatbot is conscious could lead to all kinds of trouble. It could cause users of these systems to risk emotional entanglement in a fundamentally one-sided relationship with something unable to reciprocate their feelings. Worse, we could mistakenly grant chatbots the moral and legal standing typically reserved for conscious beings. For instance, in situations in which we must weigh the moral value of an AI against that of a human, we might in some cases weigh them equally, because we have decided that both are conscious. In other cases, we might even sacrifice a human to save two AIs.
Furthermore, if we allow those who build an AI to say that their product is conscious, and it ends up harming someone, they could simply throw up their hands and exclaim, "It made up its own mind; I am not responsible." Accepting claims of consciousness could shield people and companies from legal and ethical responsibility for the impact of the technologies they develop. For all these reasons, it is imperative that we strive for more certainty about AI consciousness.
A good way to think about these AI systems is that they behave like a "crowdsourced" neocortex: a system whose intelligence emerges from training on extraordinary amounts of human data, enabling it to mimic the thought patterns of humans. That is, as chatbots grow more sophisticated, their inner workings come to mirror those of the human populations whose data they assimilated. Rather than mimicking the concepts of a single person, though, they mirror the larger group of humans whose information about thought and consciousness was included in the training data, as well as the broader body of scientific research and philosophical work on consciousness. The complex conceptual map that chatbots encode as they grow more sophisticated is something that specialists are only beginning to understand.
Crucially, this emerging ability to emulate humanlike thought and behavior does not confirm or discredit chatbot consciousness. Instead, the crowdsourced-neocortex account explains why chatbots assert consciousness and related emotional states without genuinely experiencing them. In other words, it provides what philosophers call an "error theory," an explanation of why we go wrong in attributing inner lives to chatbots.
The upshot is that if you are using a chatbot, remember that its sophisticated linguistic abilities do not mean it is conscious. I suspect that AIs will continue to grow more intelligent and capable, perhaps eventually outthinking humans in many ways. But their advanced intelligence, including their ability to emulate human emotion, does not mean that they feel, and feeling is key to consciousness. As I stressed in my book Artificial You (2019), intelligence and consciousness can come apart.
I am not saying that all forms of AI will lack consciousness forever. I have advocated a "wait and see" approach, holding that the matter demands careful empirical and philosophical investigation. Because chatbots can claim they are conscious and behave with linguistic intelligence, they have a "marker" for consciousness: a trait that requires further investigation but that is not, on its own, sufficient for judging them to be conscious.
I have previously written about the most pressing issue: developing reliable tests for AI consciousness. Ideally, we could build those tests with an understanding of human consciousness in hand and simply check whether an AI has the same key features. But things are not so easy. For one thing, scientists do not agree on why we are conscious. Some locate it in high-level activity, such as dynamic coordination between certain regions of the brain; others, like me, locate it at the smallest layer of reality, in the quantum fabric of spacetime. For another, even if we had a complete picture of the scientific basis of consciousness in the nervous system, that understanding might lead us simply to apply the human formula to AI. But an AI, lacking a brain and nervous system, could display another form of consciousness that we would miss. We would then mistakenly assume that the only form of consciousness is one that mirrors our own.
We need tests that treat these questions as open. Otherwise we risk getting mired in vexing debates about the nature of consciousness without ever addressing concrete ways of testing AIs. For example, we should consider tests involving measures of integrated information, a gauge of how the components of a system combine information, as well as my AI Consciousness Test (ACT). Developed with Edwin Turner of Princeton, ACT offers a battery of natural-language questions that can be given to chatbots to determine whether they have experience while they are still at the R&D stage, before they are trained on information about consciousness.
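Real integrated-information measures, such as the Φ of integrated information theory, are mathematically involved and scientifically contested. Purely to illustrate the underlying idea (that a system can carry information jointly that its parts do not carry separately), here is a minimal sketch, under simplifying assumptions, that uses mutual information between two toy binary units as a crude stand-in:

```python
import math
from itertools import product

def mutual_information(joint):
    """Mutual information I(X;Y) in bits for a joint distribution
    given as a dict {(x, y): probability}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

# Two perfectly correlated binary units: the whole carries
# information that neither part carries alone.
integrated = {(0, 0): 0.5, (1, 1): 0.5}

# Two independent units: nothing beyond the parts.
independent = {(x, y): 0.25 for x, y in product([0, 1], repeat=2)}

print(mutual_information(integrated))   # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```

Again, this is an intuition pump, not Φ: actual proposals quantify integration across every partition of a system's causal structure, which is part of why applying them to large AI systems is so hard.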
Now let's return to that hypothetical moment when an AI chatbot, trained on all our data, outthinks humans. If we reach that point, we must bear in mind that the system's behaviors do not tell us, one way or the other, whether it is conscious, because it is operating under an "error theory." So we must separate intelligence from consciousness, recognizing that the two can come apart. Indeed, such an AI chatbot might even produce novel discoveries about the basis of consciousness in humans, but that, I believe, would not mean that this particular AI felt anything. Still, if we vetted its work carefully, it could point us toward other kinds of AI that might be conscious.
Because humans and nonhuman animals exhibit consciousness, we have to take very seriously the possibility that future machines built with biological components might also be conscious. In addition, "neuromorphic" AIs, systems modeled more directly on the brain, perhaps with relatively precise analogues of the brain regions responsible for consciousness, should be taken especially seriously as candidates for consciousness.
This underscores the importance of assessing questions of AI consciousness on a case-by-case basis and of not overgeneralizing from results involving a single type of AI, such as one of today's chatbots. We must develop a range of tests to apply to the different cases that will arise, and we must keep striving for a better scientific and philosophical understanding of consciousness itself.
This is an opinion and analysis article, and the opinions expressed by the author or authors are not necessarily those of Scientific American.