World Tribune
Microsoft AI CEO Mustafa Suleyman warns about AI that appears ‘conscious’

August 22, 2025 · Business
Forget doomsday scenarios of AI overthrowing humanity. What keeps Microsoft AI CEO Mustafa Suleyman up at night is concern about AI systems seeming too alive.


In a new blog post, Suleyman, who also co-founded Google DeepMind, warned the world might be on the brink of AI models that are capable of convincing users that they are thinking, feeling, and having subjective experiences. He calls this concept “Seemingly Conscious AI” (SCAI).

In the near future, Suleyman predicts that models will be able to hold long conversations, remember past interactions, evoke emotional reactions from users, and potentially make convincing claims about having subjective experiences. He noted that these systems could be built with technologies that exist today, paired “with some that will mature over the next 2–3 years.”

The result of these features, he says, will be models that “imitate consciousness in such a convincing way that it would be indistinguishable from a claim that you or I might make to one another about our own consciousness.”

There are already signs that people are convincing themselves their AI chatbots are conscious beings and are developing relationships with them that may not always be healthy. People are no longer just using chatbots as tools; they are confiding in them, developing emotional attachments, and in some cases falling in love. Some users become so invested in particular versions of an AI model that they feel bereft when developers release new models and discontinue access to the old ones. For example, OpenAI’s recent decision to replace GPT-4o with GPT-5 was met with shock and anger from some users who had formed emotional relationships with the version of ChatGPT powered by GPT-4o.

This is partly because of how AI tools are designed. The most common way users interact with AI is through chatbots, which mimic natural human conversations and are designed to be agreeable and flattering, sometimes to the point of sycophancy. But it’s also because of how people are using the tech. A recent survey of 6,000 regular AI users from the Harvard Business Review found that “companionship and therapy” was the most common use case.

There has also been a wave of reports of “AI psychosis,” in which users begin to experience paranoia or delusions about the systems they interact with. In one example reported by The New York Times, a New York accountant named Eugene Torres experienced a mental health crisis after interacting extensively with ChatGPT, which made dangerous suggestions, including that he could fly.

“People are interacting with bots masquerading as real people, which are more convincing than ever,” Henry Ajder, an expert on AI and deepfakes, told Fortune. “So I think the impact will be wide-ranging in terms of who will start believing this.”

Suleyman is concerned that a widespread belief that AI could be conscious will create a new set of ethical dilemmas.

If users begin to treat AI as a friend, a partner, or as a type of being with a subjective experience, they could argue that models deserve rights of their own. Claims that AI models are conscious or sentient could be hard to refute due to the elusive nature of consciousness itself.

One early example of what Suleyman is now calling “Seemingly Conscious AI” came in 2022, when Google engineer Blake Lemoine publicly claimed the company’s unreleased LaMDA chatbot was sentient, reporting that it had expressed fear of being turned off and described itself as a person. In response, Google placed him on administrative leave and later fired him, stating that its internal review found no evidence of consciousness and that his claims were “wholly unfounded.”

“Consciousness is a foundation of human rights, moral and legal,” Suleyman said in a post on X. “Who/what has it is enormously important. Our focus should be on the wellbeing and rights of humans, animals, [and] nature on planet Earth. AI consciousness is a short [and] slippery slope to rights, welfare, citizenship.”

“If those AIs convince other people that they can suffer, or that they have a right not to be switched off, there will come a time when those people will argue that it deserves protection under law as a pressing moral matter,” he wrote.

Debates around “AI welfare” have already begun. For example, some philosophers, including Jonathan Birch of the London School of Economics, welcomed a recent decision from Anthropic to let its Claude chatbot end “distressing” conversations when users pushed it toward abusive or dangerous requests, saying it could spark a much-needed debate about AI’s potential moral status. Last year, Anthropic also hired Kyle Fish as its first full-time “AI welfare” researcher, tasking him with investigating whether AI models could have moral significance and what protective interventions might be appropriate.

But while Suleyman called the arrival of Seemingly Conscious AI “inevitable and unwelcome,” neuroscientist Anil Seth, a professor of computational neuroscience, attributed the rise of conscious-seeming AI to a “design choice” by tech companies rather than an inevitable step in AI development.

“‘Seemingly-conscious AI is something to avoid.’ I agree,” Seth wrote in an X post. “Conscious-seeming AI is not inevitable. It is a design choice, and one that tech companies need to be very careful about.”

Companies have a commercial motive to develop some of the features that Suleyman is warning of. At Microsoft, Suleyman himself has been overseeing efforts to make the company’s Copilot product more emotionally intelligent. His team has worked on giving the assistant humor and empathy, teaching it to recognize comfort boundaries, and improving its voice with pauses and inflection to make it sound more human.

Suleyman also co-founded Inflection AI in 2022 with the express aim of creating AI systems that foster more natural, emotionally intelligent interactions between humans and machines.

“Ultimately, these companies recognize that people want the most authentic feeling experiences,” Ajder said. “That’s how a company can get customers using their products most frequently. They feel natural and easy. But I think it really comes to a question of whether people are going to start wondering about authenticity.”




© 2024 World Tribune - All Rights Reserved!
