Microsoft AI Chief Warns of “Seemingly Conscious AI,” Says Emotional Illusion Is a Real Risk

Washington: Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, has warned that artificial intelligence systems could soon appear so lifelike that people may start believing they are conscious, a development he calls both “inevitable and unwelcome.”

Writing in a recent essay, Suleyman described the coming wave of “Seemingly Conscious AI” (SCAI) – advanced chatbots and virtual agents capable of mimicking human awareness through memory, emotional mirroring, and empathetic responses. While these traits may make AI more engaging, he cautioned that they could also mislead users into thinking the systems actually feel or understand, blurring the line between simulation and reality.

The Risk of Illusion

Suleyman argued that people are evolutionarily primed to believe in anything that listens and responds like a human. As AI grows more persuasive, he warned, some users are already forming emotional bonds or delusional beliefs – a phenomenon he called “AI psychosis.”

“Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare, and even AI citizenship,” Suleyman wrote.

Call for Responsible AI Design

Suleyman is not calling for a ban on emotionally intelligent AI, but he urged companies to stop anthropomorphizing chatbots or suggesting they truly “care” about people. He said developers must avoid building systems that claim to feel shame, guilt, or suffering, which could trigger misplaced human empathy.

Instead, he called for AI that is transparent about its nature – tools that maximize utility without pretending to have inner experiences. “The real danger from advanced AI is not that the machines will wake up,” he wrote, “but that we might forget they haven’t.”

Why It Matters

The warning carries weight given Suleyman’s own history: he co-founded DeepMind (later acquired by Google) and led Inflection AI, which built chatbots known for their simulated empathy. At Microsoft, he now oversees AI products such as Copilot, designed to blend productivity with conversational intelligence.

Suleyman acknowledged that large language models paired with expressive speech and long-term memory could create highly convincing illusions within just a few years — not only from tech giants but also from smaller developers with access to APIs.

Bottom Line

As AI tools become more capable of mimicking human interaction, Suleyman is pressing the industry to build guardrails now. His message: the threat is not conscious AI, but the human tendency to treat it as if it were real.
