IBM - International Business Machines Corporation

08/28/2025 | News release

How to stop AI from seeming conscious

The design challenge of demystifying AI that seems human is not new. In the 1960s, the computer scientist Joseph Weizenbaum created ELIZA, an early program that mimicked a psychotherapist. Even though ELIZA relied only on simple pattern-matching, many users reported feeling understood. Weizenbaum himself was startled by the intensity of those reactions, and he spent much of his later career warning about the dangers of anthropomorphizing software.

Today's systems are far more sophisticated. Where ELIZA used canned phrases, modern language models can generate long, context-aware responses, adopt emotional tones and remember conversations across sessions. Digital avatars add gestures and expressions. Each advance makes the illusion more powerful.

Education can help, too, Rossi said, by reminding users that no matter how fluent the words, they are not coming from a mind. At IBM, she added, AI is deployed in professional settings where training and onboarding help reinforce that distinction.

"Our solutions are for specific purposes, like helping someone in a bank or government agency do their job better," she said. "We can train users to understand that the purpose is not to replace a human collaborator, but to help them with a certain task."

Consumer chatbots are different. They reach billions of people, often with little guidance beyond a terms-of-service click. "It is not that easy to train everybody," Rossi said. "People use it for anything: health recommendations, mental health, advice on life challenges."

Some researchers propose adding reminders directly into chatbot interfaces, such as labels within chat windows or pop-up notices that clarify that the user is interacting with software. Others have suggested limiting memory across sessions so that chatbots are less likely to appear as enduring personas with lasting awareness.
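
To make those two proposals concrete, the sketch below shows one way a chat interface might implement them. It is a minimal illustration in Python, not any vendor's actual design: the generate_reply stub, the ChatSession class and the reminder interval are hypothetical names chosen for this example.

```python
# Hypothetical sketch of two interface-level safeguards discussed above:
#   1. a standing reminder, inside the chat window, that the user is
#      talking to software, and
#   2. memory scoped to a single session, so no enduring persona forms.

REMINDER = "Reminder: you are chatting with an AI system, not a person."
REMINDER_EVERY_N_TURNS = 5  # illustrative value, not a recommendation


def generate_reply(history: list[str], user_message: str) -> str:
    """Stand-in for a real language-model call (placeholder only)."""
    return f"(model reply to: {user_message!r})"


class ChatSession:
    """Holds conversation state for one session only.

    Nothing here is written to disk, so when the session object is
    discarded the "memory" disappears with it -- the chatbot cannot
    present itself as an enduring persona across sessions.
    """

    def __init__(self) -> None:
        self.history: list[str] = []
        self.turns = 0

    def send(self, user_message: str) -> str:
        self.turns += 1
        reply = generate_reply(self.history, user_message)
        self.history.extend([user_message, reply])

        # Surface the disclosure label in the chat window itself,
        # repeating it periodically rather than only at sign-up.
        if self.turns == 1 or self.turns % REMINDER_EVERY_N_TURNS == 0:
            reply = f"{REMINDER}\n\n{reply}"
        return reply


if __name__ == "__main__":
    session = ChatSession()
    print(session.send("Can you give me some health advice?"))
    # When this process ends, session.history is gone: a new session
    # starts with no recollection of earlier conversations.
```

The design point of both safeguards is that they live in the interface layer: the disclosure travels with the conversation rather than sitting in a terms-of-service page, and because nothing persists between sessions, the system cannot present itself as a companion that remembers the user's life.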

Rossi said AI systems present themselves as consistent personalities, making it easier for users to form emotional bonds that feel real even if the connection is not. She pointed to the reaction when GPT-4 was phased out, noting that some users responded as if they had lost a trusted companion. "People said, 'I don't want to lose this model because it helped me in difficult life situations.' They felt as if they had lost a friend," she said.

Psychologists warn that AI companions can deepen isolation, offering comfort in the moment but not substituting for real human connection. Suleyman has gone further, warning that some users could even push for AI citizenship, the idea that highly advanced systems might deserve legal rights or social recognition as entities.

Rossi dismissed that idea, calling it a distraction from the real safeguards the industry needs. "These machines should not be thought of as human beings," she said. "They are very useful to human beings, but they are not human."

If the ethical debate veers toward rights, Rossi said, the industry will lose sight of the practical safeguards it needs to put in place now. "Consciousness, to me, is not even a question worth addressing scientifically. Intelligence can be tested from the outside," she added. "Consciousness cannot. What matters is the perception."

However, her view echoes Suleyman's broader point: the risk is not that AI develops a mind, but that people become convinced it has.

The conversation also connects to what Rossi says is the first principle guiding IBM's AI ethics work: that "AI must augment human intelligence," not replace it. "This implies that AI is not like a human being," she said. "It is just an assistant, or an agent." She extends that view to the larger purpose of technology. "Humanity should build and use technology to advance, grow, become wiser and thrive. AI being perceived as conscious does not seem to lead us there."
