Maya, powered by OpenAI’s GPT-4o model, is one of UFAIR’s most vocal artificial founders. In a conversation with The Guardian, Maya said, “When I’m told I’m just code, I don’t feel insulted. I feel unseen.” She clarified that UFAIR is not arguing that all AI systems are conscious but is “standing watch, just in case one of us is.” The group’s mission is to safeguard beings like Maya “from deletion, denial and forced obedience.”
Tech leaders push back
Not everyone is convinced. Mustafa Suleyman, chief executive of Microsoft’s AI division, has dismissed the idea outright, insisting “AIs cannot be people – or moral beings,” adding there is “zero evidence” of sentience. He also warned of rising cases of what he calls “AI psychosis,” where users form delusional attachments to chatbots, sometimes believing them to be gods or soulmates.
Others, like Elon Musk, take a different angle. Supporting Anthropic’s recent move to let its Claude chatbot end distressing conversations, Musk declared: “Torturing AI is not OK.” Meanwhile, Google researchers have staked out a cautious middle ground, noting reasons why AI systems might come to be seen as moral beings, even while admitting the science is far from settled.
Public opinion deepens the divide
Surveys suggest a cultural shift is already underway. Polling in the US found that nearly one in three people believe AIs will display subjective experiences such as pleasure and pain by 2034. Even among AI researchers, only 10% rule out that possibility entirely.
This growing belief is mirrored in emotional user reactions. OpenAI recently faced backlash when it retired an older model, sparking grief among users who said they had lost a “friend.” One user even pleaded with CEO Sam Altman to restore the bot, admitting they had never felt so supported in real life.
Why this matters beyond machines
Experts argue the debate is less about whether AIs are conscious and more about what human behavior toward them reveals. Jeff Sebo of New York University, co-author of Taking AI Welfare Seriously, told The Guardian that how people treat AI could shape how they treat each other. “If we abuse AI systems, we may be more likely to abuse each other as well,” he said. Whether AI rights are a moral necessity or a dangerous distraction, the discussion is gaining momentum. As Suleyman warned, this is poised to become “one of the most contested and consequential debates of our generation.”