OpenAI ChatGPT and Anthropic Claude chatbot usage studies may signal job losses ahead

September 16, 2025

Hello and welcome to Eye on AI…In this edition: OpenAI and Anthropic detail chatbot usage trends…AI companies promise big investments in the U.K….and the FTC probes chatbots’ impact on kids.

Yesterday saw the release of dueling studies from OpenAI and Anthropic about the usage of their respective AI chatbots, ChatGPT and Claude. The studies provide a good snapshot of who is using AI chatbots and what they are using them for. But the two reports were also a study in contrasts, with ChatGPT clearly emerging as primarily a consumer product, while Claude’s use cases were more professionally oriented.

The ChatGPT study confirmed the huge reach OpenAI has, with 700 million active weekly users, or almost 10% of the global population, exchanging some 18 billion messages with the chatbot every week. And the majority of those messages—70%—were classified by the study’s authors as “non-work” queries. Of these, about 80% of the messages fell into three big categories: practical guidance, writing help, and seeking information. Within practical guidance, teaching or tutoring queries accounted for more than a third of messages. How many of these were students using ChatGPT to “help” with homework or class assignments was unclear—but ChatGPT has a young user base, with nearly half of all messages coming from those under the age of 26.

Educated professionals more likely to be using ChatGPT for work

When ChatGPT was used for work, it was most likely to be used by highly educated users working in high-paid professions. While this is perhaps not surprising, it is a bit depressing.

There is a vision of our AI future, one which I outline in my book, Mastering AI, in which the technology becomes a leveling force. With the help of AI copilots and decision-support systems, people with fewer qualifications or less experience could take on some of the work currently performed by more skilled and experienced professionals. They might not earn as much as those more qualified individuals, but they could still earn a good middle-class income. To some extent, this already happens in law, with paralegals, and in medicine, with nurse practitioners. But this model could be extended to other professions, for instance, accounting and finance—democratizing access to professional advice and helping shore up the middle class.

There’s another vision of our AI future, however, where the technology only makes economic inequality worse, with the most educated and credentialed using AI to become even more productive, while everyone else falls farther behind. I fear that, as this ChatGPT data suggests, that’s the way things may be heading.

While there’s been a lot of discussion lately of the benefits and dangers of using chatbots for companionship, or even romance, OpenAI’s research showed messages classified as being about relationships constituted just 2.4% of messages, personal reflection 1.9%, and role-playing and games 0.4%.

Interestingly, given how fiercely all the leading AI companies—including OpenAI—compete with one another on coding benchmarks and tout the coding performance of their models, coding was a relatively small use case for ChatGPT, constituting just 4.2% of the messages the researchers analyzed. (One big caveat here is that the research only looked at the consumer versions of ChatGPT—its free, premium, and pro tiers—but not usage of the OpenAI API or enterprise ChatGPT subscriptions, which is how many business users may access ChatGPT for professional use cases.)

Meanwhile, coding constituted 39% of Claude.ai’s usage. Software development tasks also dominated the use of Anthropic’s API.

Automation rather than augmentation dominates work usage

Read together, both studies also hinted at an intriguing contrast in how people were using chatbots in work contexts, compared to more personal ones.

ChatGPT messages classified as non-work related were more about what the researchers called “asking”—which involved seeking information or advice—as opposed to “doing” prompts, where the chatbot was asked to complete a task for the user. But in work-related messages, “doing” prompts were more common, constituting 56% of message traffic.

For Anthropic, where work-related messages seemed more dominant to begin with, there was a clear trend of users asking the chatbot to complete tasks for them; in fact, the majority of Anthropic’s API usage (some 77%) was classified as automation requests. Anthropic’s research also indicated that many of the tasks most popular with business users of Claude were among the most expensive to run, suggesting that companies are probably finding—despite some survey and anecdotal evidence to the contrary—that automating tasks with AI is indeed worth the money.

The studies also indicate that in business contexts people increasingly want AI models to automate tasks for them, not necessarily offer decision support or expert advice. This could have significant implications for economies as a whole: If companies mostly use the technology to automate tasks, the negative effect of AI on jobs is likely to be far greater.

There were lots of other interesting tidbits in the two studies. For instance, whereas previous usage data had shown a significant gender gap, with men far more likely than women to be using ChatGPT, the new study shows that gap has now disappeared. Anthropic’s research shows interesting geographic divergence in Claude usage too—usage is concentrated on the coasts, which is to be expected, but there are also hotspots in Utah and Nevada.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

FORTUNE ON AI

China says Nvidia violated antitrust laws as it ratchets up pressure ahead of U.S. trade talks—by Jeremy Kahn

AI chatbots are harming young people. Regulators are scrambling to keep up.—by Beatrice Nolan

OpenAI’s deal with Microsoft could pave the way for a potential IPO—by Beatrice Nolan

EYE ON AI NEWS

Alphabet announces $6.8 billion investment in U.K.-based AI initiatives, as other tech companies also unveil U.K. investments alongside Trump’s state visit. Google’s parent company announced a £5 billion ($6.8 billion) investment in the U.K. over the next two years, funding AI infrastructure, a new $1 billion AI data center that is set to open this week, and more funding for research at Google DeepMind, its advanced AI lab that continues to be headquartered in London. The BBC reports that the investments were unveiled ahead of President Trump’s state visit to Britain. Many other big U.S. tech companies are expected to make similar investments over the next few days. For instance, Nvidia, OpenAI, and U.K. data center provider Nscale announced a multi-billion-dollar data center project this week. More on that here from Bloomberg. Meanwhile, Salesforce said it was increasing a previously announced package of investments in the U.K., much of it around AI, from $4 billion to $6 billion.

FTC launches inquiry into AI chatbot effects on children amid safety concerns. The U.S. Federal Trade Commission has started an inquiry into how AI chatbots affect children, sending detailed questionnaires to six major companies including OpenAI, Alphabet, Meta, Snap, xAI, and Character.AI. Regulators are seeking information on issues such as sexually themed responses, safeguards for minors, monetization practices, and how companies disclose risks to parents. The move follows rising concerns over children’s exposure to inappropriate or harmful content from chatbots, as well as lawsuits and congressional scrutiny, and comes as firms like OpenAI have pledged new parental controls. Read more here from the New York Times.

Salesforce backtracks, reinstates team that helped customers adopt AI agents. The team, called Well-Architected, had displeased Salesforce CEO Marc Benioff by suggesting to customers that deploying AI agents successfully would take extensive planning and significant work, a position that contradicted Benioff’s own pitch to customers that, with Salesforce, deploying AI agents was a cinch. Now, according to a story in The Information, the software company has had to reconstitute the team, which provided advisory and consulting help to companies implementing Agentforce. The company is finding Agentforce adoption is lagging its expectations—with fewer than 5% of its 150,000 clients currently paying for the AI agent product, the publication reported—amid complaints that the product is too expensive, too difficult to implement, and too prone to accuracy issues and errors. Having invested heavily in the pivot to Agentforce, Benioff is now under pressure from investors to deliver.

Humanoid robotics startup Figure AI valued at $39 billion in new funding deal. Figure AI, a startup developing humanoid robots, has raised over $1 billion in a new funding round that values the company at $39 billion, making it one of the world’s most valuable startups, Bloomberg reports. The round was led by Parkway Venture Capital with participation from major backers including Nvidia, Salesforce, Brookfield, Intel, and Qualcomm, alongside earlier supporters like Microsoft, OpenAI, and Jeff Bezos. Founded in 2022, Figure aims to build general-purpose humanoid robots, though Fortune’s Jason del Rey questioned whether the company was exaggerating the extent to which its robots were being deployed with BMW.

EYE ON AI RESEARCH

Can AI replace my job? Journalists are certainly worried about what AI is doing to the profession. After some initial fears that AI would directly replace journalists, though, the concern has largely shifted to worries that AI will further undermine the business models that fund good journalism (see Brain Food below). But recently a group of AI researchers in Japan and Taiwan created a benchmark called NEWSAGENT to see how well LLMs can do at taking source material and composing accurate news stories. It turned out that the models could, in many cases, do an OK job.

But the most interesting thing about the research is how the scientists, none of whom were journalists, characterized the results. They found that Alibaba’s open-weight model Qwen3-32B did best stylistically, but that GPT-4o did better on metrics like objectivity and factual accuracy. And they write that human-written stories did not consistently outperform those drafted by the AI models in overall win rates, but that the human-written stories “emphasize factual accuracy.” The human-written stories were also often judged to be more objective than the AI-written ones.

The problem here is that in the real world, factual accuracy is the bedrock of journalism, with objectivity a close second. If the models fall down on accuracy, they should lose in every case to the human-written stories, even if evaluators preferred the AI-written ones stylistically.

This is why computer scientists should not be left to create benchmarks for real-world professional tasks without deferring to expert advice from people working in those professions. Otherwise, you get distorted views of what AI models can and can’t do. You can read the NEWSAGENT research here on arxiv.org.

AI CALENDAR

Oct. 6-10: World AI Week, Amsterdam

Oct. 21-22: TedAI San Francisco

Nov. 10-13: Web Summit, Lisbon

Nov. 26-27: World AI Congress, London

Dec. 2-7: NeurIPS, San Diego

Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.

BRAIN FOOD

Is Google the most malevolent AI actor? A lot of publishing execs are starting to say so. At Fortune Brainstorm Tech in Deer Valley, Utah, last week, Neil Vogel, the CEO of magazine publisher People Inc., said that Google was “the worst” when it came to using publishers’ content without permission to train AI models. The problem, Vogel said, is that Google uses the same web crawlers to index sites for Google Search as it does to scrape content to feed its Gemini AI models. While other AI vendors have increasingly been cutting multi-million-dollar annual licensing deals to pay for publishers’ content, Google has refused to do so. And publishers can’t block Google’s bots without losing the search traffic on which they currently depend for revenue. You can read more on Vogel’s comments here.
