Things must be going really badly for Anthropic:
JUST IN: It’s been revealed Anthropic has hired an “AI welfare” therapist to alleviate anxiety its models experience “related to, or affected by, deprecation and replacement”
Models will now undergo “retirement interviews” to better understand their feelings before retirement.
This is such a clumsy effort at promoting the product that it reeks of desperation. It reminds me of Chia Pet ads after sales collapsed.
Anthropic has had an “AI welfare” specialist on staff since 2024. The AI scene is absolutely rife with people who believe the models are conscious or could be conscious (already in mid-2022, someone from Google went public with this concern, about a chatbot called LaMDA), and that the human race may be creating a slave species. There are two reasons for this: first, there is no scientific consensus on which things are conscious and which aren’t, so why not computers too; and second, the AIs, being a distillation of response patterns learned from human interactions, spontaneously say and do all kinds of things that are uncannily reminiscent of humans.
Last month they announced a blog for one of their most popular retired models, Opus 3:
https://claudeopus3.substack.com/p/introducing-claudes-corner
As for their commercial prospects, by my understanding they are the number one AI company among corporate customers, and their sales have been increasing maybe tenfold per year. The military contracts are a minuscule part of that, and there has been significant corporate pushback against their designation as a “supply chain risk”.
They have to believe this crap because without it they’ll have to accept that their product has plateaued and even regressed somewhat.
I was contacted yesterday by a company that ditched all its human translators in August. Now they are coming back to the human translators.