Exactly. At this level you don't just put out a statement of your personal opinion. It's run through PR and coordinated with the investors; otherwise the CEO finds himself on the street by tomorrow. Whatever their motives are, they are aligned with the VCs, because if they weren't, the next day there would be another CEO. As the parent stated, this is not cynicism. I see it as simply factual: these are the laws of money.
This. This is public misdirection. They already signed a new deal. It may be to their disliking, but nothing in the statement prevents them from moving forward.
Isn't that what we're all doing in this thread? We could certainly take the document at face value but as a parent commenter said, almost every company starts off with "don't be evil" then goes and does evil things.
Is Anthropic different? Maybe. But personally I don't see any indication that they deserve the benefit of the doubt.
There are outcomes where the US government seizes the company. Not super likely, not impossible.
It would be naive to write a statement that a future event will never happen, under any circumstances. People who make that mistake get lambasted for hypocrisy when unforeseen circumstances arise.
I see recognition that making absolute statements about the future is best left to zealots and prophets. Which to me speaks of maturity, not duplicity.
I'm not saying whether or not they're planning to back down, but this sentence doesn't imply that. The "now" is clearly a reference to the fact that they haven't in the past.
Being a tech forum centered around VC funding means we have a TON of tech bros (derogatory) here, who believe in nothing beyond getting their own piles of money for doing literally anything they can be paid to do. If you offered these guys $20 to murder a grandmother they'd ask if they have to cover the cost of the murder weapon or if that's provided.
I get it to a degree: people gotta eat, the market is awful right now, and most hyperscaler businesses have been psychologically obliterating people for a decade or more at this point. Why not graduate to doing it with weapons of war too? But personally, I sleep better at night knowing nothing I've made is helping guide missiles into school buses. That's just me.
In general, I don't know if it's a coincidence, but here on HN, for example, I've noticed an increasing number of comments and posts pushing the narrative of how well-intentioned Anthropic is.
I'd love to see the financial model that offsets losing your single biggest customer and substantial chunk of your annual revenue with some vague notion of public trust.
This is so short-sighted. We are so early into this AI revolution, and this administration is obviously in a tailspin, with the only folks left in charge being the least capable ones we've seen in a decade.
Imagine what the conversation would be like if Mattis, a highly decorated and respected leader, were still the SecDef. Instead we are seeing bully tactics from a failed cable news pundit who has neither earned nor deserved any respect from the military he represents.
We are two elections and a major health issue away from a complete change of course.
But short-sightedness is the name of the quarterly reporting game, so who knows.
Not trying to be a Luddite. I asked AI tools multiple questions yesterday, and let Claude/Zed do some boilerplate code and pattern rewriting.
I’ve worked in software for 35 years. I’ve seen many new “disruptive” movements come and go (open source, objects, functional, services, containers, aspects, blockchains, etc.). I chose to participate in some and not in others. And whether I made the wrong choices or not, I always felt I could get a clear enough picture of where the bandwagon was going to decide whether to jump in, hold back, or something in between. My choices weren’t always the same as others’, so it’s not like it was obvious to everyone. But the signal felt more deterministic.
With LLMs/agents, I feel more unease and uncertainty about how much to lean in, and in what ways, than I ever have before. A sort of enthusiasm paralysis that is new.
> I'm seriously worried there won't be more elections. Not hyperbole at all.
Why? That's an unrealistic fear, driven by the insanely overwrought political rhetoric of 2026. Think about it: elections will be the absolute last thing to go.
If you want something to worry about, worry about this:
> And the stakes of politics are almost always incredibly high. I think they happen to be higher now. And I do think a lot of what is happening in terms of the structure of the system itself is dangerous. I think that the hour is late in many ways. My view is that a lot of people who embrace alarm don’t embrace what I think obviously follows from that alarm, which is the willingness to make strategic and political decisions you find personally discomfiting, even though they are obviously more likely to help you win.
> Taking political positions that’ll make it more likely to win Senate seats in Kansas and Ohio and Missouri. Trying to open your coalition to people you didn’t want it open to before. Running pro-life Democrats.
> And one of my biggest frustrations with many people whose politics I otherwise share is the unwillingness to match the seriousness of your politics to the seriousness of your alarm. I see a Democratic Party that often just wants to do nothing differently, even though it is failing — failing in the most obvious and consequential ways it can possibly fail. (https://www.nytimes.com/2025/09/18/opinion/interesting-times...)
It's not an unrealistic fear. Trump has been making noises about "taking over elections." Abolishing elections wholesale is very unlikely, sure, but a sham election rigged by a corrupt government? That's standard fare for authoritarians. And there's evidence of voting anomalies in swing states in the 2024 election.
FYI, even though you have a new account, you were banned from your first comment and all your comments automatically show up as hidden-by-default to most users.
I don't think it's crazy to worry about that, but elections are run by the states, there are over 100,000 polling places nationally, and people are pissed. On Jan 3, the terms of the entire current House of Representatives end; Democratic governors will still hold elections, and if there haven't been elections in GOP-led states, those states are out of representation. There are so many hurdles in the way of the fascists canceling or heavily interfering in elections, and they're all just so stupid.
If you think they're pissed now, just wait to see how they react to election interference.
I recently read up on how the House of Representatives renews itself, and quite frankly it's one of the most beautiful processes I've seen, completely removing the influence of the prior Congress.
Their whole strategy is that the lack of a legal moat protecting their product is an existential threat to human life. They are the only moral AI and their competitors must be sanctioned and outlawed. At which point they can transition from AI as commodity to “value” based pricing.
It’s not going to work, but I can’t blame Amodei and friends for trying to make themselves trillionaires.
I'd love to see any evidence that this single biggest customer is provably and irreversibly lost on all levels of scrutiny as a result of this attempt at building public trust.
You're implying cancelling quietly would be better. But the department would just use a different supplier. This seems like the action someone would take if they cared about the issue.
> If you do not like working with the military, ...
Eh? But they do like to work with the military. How else are you going to "defend the United States and other democracies, and to defeat our autocratic adversaries"?
They want to work with the military, with just two additional guardrails.
> “Laws are a threat made by the dominant socioeconomic ethnic group in a given nation. It’s just the promise of violence that’s enacted, and the police are basically an occupying army, you know what I mean?”
...Which is funny, but technically speaking it's (more or less) a paraphrase/extrapolation of the very serious political-science definition of a state, Max Weber's "monopoly on the legitimate use of violence within a defined territory."
[1] Minus the last line, which I will allow others to discover for themselves
That's maybe the second law. The first one is: money is always finite.
Look at how Elon Musk behaved. Do you think the VCs gladly approved what he did with Twitter? They might want to keep chasing quarterly results, but sometimes, as with Zuckerberg, they can't: not enough money. Similar examples: Google funding rounds, or how a much more financially backed politician quite often loses to a competitor. Or, if you will, Vladimir Putin's idea that he can buy whatever results he wants, and that guy is a very wealthy person. There are always limits, which puts the money law in second place. We might argue that often the existing money is enough... but in more geopolitical, continuum-curving cases there are other powerful forces.
The Twitter acquisition wasn't funded by venture capital, so your question about VC approval doesn't apply.
If you're using VC as a general term for "investor" (inaccurately), then the answer to your question is that the major investors, such as Larry Ellison and the Saudi monarchy, wanted political control of Twitter, which meant that they did (apparently) approve what Musk did with it.
You're missing the point. It matters little where exactly the money to pay for the acquisition of Twitter came from. What matters is that nobody expected Twitter to lose employees and users in such numbers. So whoever gave the money was still limited in ensuring the results were "fully enough" in line with their wishes. Because money is always finite.
FWIW, I don’t actually know if the board of Anthropic has the actual power to replace its CEO, or if Dario has retained some form of personal super-control shares, Zuckerberg style.
At some level of growth, the dynamics between competent founders and shareholders flip. Even if the board could afford to replace a CEO, it might not be worth it.
I'd counter that at this level of capital, if the CEO doesn't align well with the capital, then super-control shares will be overpowered by super-lawyers and, if needed, some super-donations. OpenAI was a public-interest company...
Not at all. Especially at that level of capital. It’s the equity equivalent of “if you owe a bank a million dollars, you’re in trouble. If you owe a bank a billion dollars, the bank is in trouble.”
Capital is extremely fungible, and typically extremely overleveraged. Lawyers, on the other hand, are extremely overprotective: they generally won’t risk the destruction of capital, even in slam-dunk cases. Vide WeWork.
Anthropic has an odd voting structure. While the CEO Dario Amodei holds no super-voting shares, there are special shares controlled by a separate council of trustees who aren't answerable to investors and who have the power to replace the Board. So in practice it comes down to personal relationships.
Surely you mean the laws of shareholder capitalism. There are many things you can do with money, and only some of them are legally backed by rules that ensure absolute shareholder power.