> 2. In the ~2010-2022 timeframe, tech companies poured all this money into speculative bets
Any data/sources on which this might be based? The pandemic was 6 years ago; do these "Agile" (the tech term) companies really carry many unproductive lines-of-business for so long?
> speculative bets that never went anywhere ... think Amazon's Alexa devices division, Google Stadia, and perhaps most famously the Metaverse itself
Organizations make speculative bets all the time. Is there an accounting of the profitability of Alexa/Nest etc.?
> end of the ZIRP era would have caused companies to kill these inherently unprofitable projects
If you plug in the years 2020-2026 on the Fed Rate vs. Unemployment chart at [1], it shows that from 2020 to 2022, rates were near zero while unemployment spiked during Covid and then fell. From 2022 through 2023, rates rose sharply while unemployment stayed relatively low. In 2024-2025, the labor market softened. You can add the Federal Funds Effective Rate and the Unemployment Rate easily through the menu.
Unemployment stayed low through the rise in rates for almost two years prior to 2024. Given that companies operate on a quarterly reporting basis and program/project decisions happen at least on that cadence, I don't think the causal chain you're suggesting (Rates-Go-Up -> Projects-Get-Killed -> Layoffs-Increase) quite lines up with the economy-wide data in this exceptional case of 2022-2023.
We may have to look elsewhere for the reasons behind the current labor market weakness ... cough..*economy*..*trade walls*..cough...*structural re-alignment* [2]...cough...
How many hall passes do CEOs get for pandemic overhiring? It's been four years, and four years of layoffs, then hiring, then back to layoffs. Pandemic overhiring might explain the first layoff or two a year or so later, but not the tenth layoff nearly half a decade later.
It's the principal/agent problem: The overhiring is incredibly difficult to unwind because it led to the creation of empires. Everyone who is not a founder has more incentive to keep the empire because it props up their comp.
I see this firsthand. Execs who are openly hostile to and distrusting of their own middle management.
These were supposed to be farsighted geniuses who took credit for their stock performance. But they made the most obvious blunder: opening a Pandora’s box that they can’t close.
In 2023, Meta did two layoffs, one of 10,000 (another 13.6%), one of 600
In 2024, Meta did a layoff (layoffs.fyi doesn't list the number)
In 2025, Meta did three layoffs, one of 3,600 (5%), 100 and 600 people
In 2026, Meta has done two layoffs, one of 200, and one of 8,000 (10%).
If you can't correct overhiring in 4 years as a CEO, you've failed. If you repeatedly have to do >10% layoffs every year for 4 years you've failed (if it's to correct for a one time hiring spree).
Meta's employment didn't skyrocket during the pandemic, making this argument even more bullshit. Between 2012 and 2018, Meta averaged employee count growth of 40% YoY. In 2020 they grew by 31%, in 2021 by 22%, and in 2022 by 20%. Meta slowed its growth during the pandemic.
Using a relative comparison is not appropriate for a software company, which is incredibly scalable by nature. YoY growth is not the appropriate metric here. These are not workers on the line in a widget factory.
Just to play devil's advocate here: from 2022-2026, did they ever have any large hiring cycles that would replace some or most of these layoffs? Or should I believe none of the layoffs in the last four years have been replaced at all?
I do agree with your point about overhiring; it's been way too long, and this continues to be an excuse without any real evidence to back it up.
Whether it's bullshit or not, does that even matter? Meta's CEO, Mark Zuckerberg, has majority voting rights. This is not a secret. Everyone who considers working for or investing in Meta is aware that hiring and firing decisions are subject to his whims.
> Any data/sources on which this might be based? The pandemic was 6 years ago; do these "Agile" (the tech term) companies really carry many unproductive lines-of-business for so long?
Do big tech companies like FB and Google even pretend to be "agile" anymore? I think they mostly sell themselves on institutional stability and monopolist market positions rather than speed of execution
Did they ever pretend, and did anyone ever believe that? An "Agile" organization is even more of a bullshit concept than "Agile" in the team.
Except for trivially sized, freshly formed startups, companies cannot be "Agile", because finance and legal and HR and even marketing have constraints setting the tempo; you cannot just drive them with a sprint as if it were a clock signal.
> An "Agile" organization is even more of a bullshit concept than "Agile" in the team.
> Except for trivially sized, freshly formed startups, companies cannot be "Agile", because finance and legal and HR and even marketing have constraints setting the tempo; you cannot just drive them with a sprint as if it were a clock signal.
Implementations of Agile at different companies can be an issue, yes. But that is to be expected in any large organization, simply because of scale. It doesn't change the fact that the on-the-ground teams at agile orgs work to a different cadence and approach than teams at traditionally structured companies.
There are a few different ways to manage interfacing with parts of the org that need to march to a different beat. That always creates friction, and has to be managed properly. Any large org can suffer from hubris, middling management skills and capacity, wasted effort. Problems of scale, I guess.
Enterprise methodologies like Scaled Agile Framework (SAFe) are explicitly designed around managing that friction. Developers may complain about the additional toil and process overhead it imposes on them, but for the organization as a whole this is sometimes the least bad option.
Yes, they do. In fact, they have several competing committees to market the concepts internally through armies of PMs and TPMs and KPIs and efficiency metrics tracking. Also your AI token use of course.
I was in two different Google orgs and neither one used any sort of agile methodology at all, and in fact I almost never heard agile even discussed. Some teams arranged their bugs "kanban"-style but that was up to the individual team; it wasn't a company-wide decision.
> Do big tech companies like FB and Google even pretend to be "agile" anymore?
Folks from those companies will have to speak up, but my understanding is that yes, internally these large tech orgs use the Agile Methodology, as opposed to the 'traditional' 'Waterfall' development methods.
>> If you pay attention to cats, you figure out they are fuzzy little “difference engines.”
> That must be a mechanism in the brain rather than the eye
Check out "A Thousand Brains: A New Theory of Intelligence" [1] by Jeff Hawkins [2], of PalmPilot fame. This theory postulates, in part, and with evidence, that brains are continuously comparing sensory input and movement context with learned models. I found the book to be mind-blowing, so to speak ...
From Eric Hartford at Lazarus-AI [1]: "Clearwing is a fully open-source vulnerability discovery engine. Crash-first hunting, file-parallel agents, oracle-driven verification, variant hunting, adversarial verification. Works with any LLM."
"I tested it with OpenAI Codex 5.4 and reproduced Glasswing's findings. I'm now reproducing results with our own ReAligned model - Qwen3.5 finetuned to Western alignment."
"Mythos is certainly a great model. The N-day exploit walkthroughs in Anthropic's blog show real reasoning depth. But it's an incremental improvement..." "The real innovation isn't the model. It's the workflow:
- Rank every file in a codebase by attack surface
- Fan out hundreds of parallel agents, each scoped to one file
- Use crash oracles (AddressSanitizer, UBSan) as ground truth
- Run a second verification agent to filter noise
- Generate exploits as a triage mechanism for severity
That's a pipeline. And pipelines are model-agnostic."
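For concreteness, the five steps above can be sketched in a few dozen lines of Python. To be clear, everything here is a hypothetical illustration: the function names, the keyword-based ranking heuristic, and the `llm` placeholder are mine, not Clearwing's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

def llm(prompt: str) -> str:
    """Placeholder for any LLM backend -- the point is that the
    pipeline does not care which model sits here."""
    raise NotImplementedError("plug in your model of choice")

def rank_by_attack_surface(paths):
    """Step 1: rank files by attack surface. Here a crude keyword
    heuristic stands in for a real ranker (parsing code, untrusted
    input handling, etc.)."""
    risky = ("parse", "read", "recv", "memcpy", "strcpy", "alloc")
    def score(path):
        with open(path, errors="ignore") as f:
            text = f.read().lower()
        return sum(text.count(k) for k in risky)
    return sorted(paths, key=score, reverse=True)

def hunt(path):
    """Step 2: one agent scoped to exactly one file."""
    with open(path, errors="ignore") as f:
        return path, llm(f"Find memory-safety bugs in:\n{f.read()}")

def crash_oracle(path, finding):
    """Step 3: ground truth. A real oracle rebuilds the target with
    -fsanitize=address,undefined and checks that a PoC input crashes."""
    return True  # stub

def verify(path, finding):
    """Step 4: a second, adversarial agent filters noise."""
    return "confirmed" in llm(f"Adversarially verify: {finding}").lower()

def pipeline(paths, workers=8):
    ranked = rank_by_attack_surface(paths)
    with ThreadPoolExecutor(workers) as pool:  # fan out in parallel
        candidates = list(pool.map(hunt, ranked))
    # Step 5 (exploit generation for severity triage) would follow here.
    return [(p, f) for p, f in candidates
            if crash_oracle(p, f) and verify(p, f)]
```

The interesting design property is step 3: a sanitizer crash is unambiguous ground truth, so the LLMs only ever propose and filter findings, they never adjudicate them.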
Disclaimer: I'm not affiliated with Eric/Lazarus in any way.
While this model set (GPT-Rosalind) is limited to certain organizations, the announcement also included the release of a Life Sciences Plugin, which is more broadly available on Codex [1].
> Americans clearly don't believe in science anymore
There's about a third that lean that way, or at least they don't care, and they have gained control of the government because of various factors, namely:
part of the middle third disillusioned with economics (left behind) and wanting a change,
another part of the middle third staying home because of geopolitics,
and yet another part of the middle third falling prey to media biased by right-wing billionaire/corporatist capture.
Any suggestions for a long-term fix for this problem?
I don't know what the long term fix is, because that presupposes the ability to plan long term, which is something I don't think the US is capable of anymore.
Part of the solution has to be breaking down the aggressive selfishness and individualism of American society and establishing the ideal of a common American cultural identity and civic duty. This used to exist, but only within the framework of racial and cultural homogeneity. We need that but without the Christian nationalism and white supremacy. That means Americans will have to believe in society and government and each other, rather than only their own immediate interests. It means some dirty words for Americans, a bit more "socialism" and "multiculturalism", maybe "regulations" and "taxes."
We need a strong science and civics curriculum in our schools, which means we need to fund schools, which means we need to stop seeing schools as dens of atheist communist mind control, which will be a problem for a lot of the country. We need to establish separation of church and state as an explicit Constitutional principle. We need to remove tax exempt status for religious institutions. We need to repeal the Electoral College so that conservative Christian votes don't count more than everyone else's. I don't think that keeping slave states in the union is still a problem worth worrying about.
But I don't know. How do you make people give a damn? How do you convince people that an objective reality exists? How do you convince people that empathy isn't a sin? Maybe it's just a generational thing. Maybe enough bastards just need to die out.
> the ability to plan long term, which is something I don't think the US is capable of anymore.
It may seem that way, but this lack is temporary until the pendulum swings back the other way. What is needed is some mechanism to keep progress and planning going even when the pendulum is unfavorable.
> the aggressive selfishness and individualism of American society
It's an error to think the loudest voices are the majority. Also, selfishness and individualism are not necessarily conjoined twins, though it may seem that way at the moment. Americans are generous with their time and money, as one can see from donation stats. [1] The comparative data at [2] is especially eye-opening.
> This used to exist, but only within the framework of racial and cultural homogeneity
This might be a myth. See [2]. Also, cooperative/pro-social behaviors are well documented across a spectrum of biological species, including humans. It might be innate to structured biological life, individual pathologies notwithstanding. "Society" is a thing, after all.
> It means some dirty words for Americans
I think this is an artifact of media capture. We the people need to wrest back control of the medium.
> Maybe enough bastards just need to die out.
There's always new ones being minted, unfortunately. Hence the need for a long-term solution.
> How do you make people give a damn?
Maybe we just need to organize those who do. Any suggestions how?
When you frame it like that it sounds like some kind of vanguard of class-conscious people should try to rebel and establish a, I don't know, dictatorship of the proletariat? Maybe they could give themselves some kind of Russian name to sound cool.
> When you frame it like that it sounds like some kind of vanguard of class-conscious people
Less class-conscious and more reality-conscious - there's always going to be a group that's anti-science/anti-rationality because of religion, views, etc. It's when they get into power and stop the progress of science that it becomes an issue.
> should try to rebel and establish a, I don't know, dictatorship of the proletariat?
No need for anything quite as drastic. And that would be effective only for a duration of time until the pendulum swings the other way. Also, I'm sure from the anti-science folks' perspective it's the pro-science folks that are oppressive when the latter are in government.
There must be some long-term solution to insulate science from the swings of the pendulum, without devolving into chaos or oppression. Maybe the internet hive-mind can brainstorm a solution. We also need a forum where like-minded people can have this discussion without getting downvoted into oblivion. Any options?
> since the only difference is the user's intentions
Have these been banned yet: dual-use kitchen items, actual weapons of war for consumer use, dual-use garden chemicals, dual-use household chemicals etc. etc? Has human cybersecurity research stopped? Have malware authors stopped research?
No? Then this sounds more like hype than real reasons.
There's also the possibility that there's a singular anthropic individual who's gained a substantial amount of internal power and is driving user-hostile changes in the product under the guise of cybersecurity.
> and people don't come in droves. Because the product is noticeably worse.
As of Oct 2025, it appears that OpenAI's market share is roughly 17x that of Anthropic: 60% vs 3.5% [1].
As of April 2026, OpenAI has 900 million weekly users [2] while Anthropic has 300 million monthly users [1].
As of March 2026, OpenAI app downloads were 2.2 million per day, while Anthropic app downloads were 340,000. OpenAI mobile users were 248 million per day, while Anthropic mobile users were 9.4 million. In Feb 2026, ChatGPT had 5.4 billion web visits, while Claude had 290 million web visits. [3]
It seems to me that OpenAI operates at a much higher scale than Anthropic. Since you used droves as a proxy for product quality, by that standard Anthropic has a far inferior product. :)
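For the curious, here's what the relative scale works out to, plugging in the figures quoted above:

```python
# Market share, Oct 2025 (figures from [1] as quoted above)
openai_share, anthropic_share = 60.0, 3.5
print(round(openai_share / anthropic_share, 1))  # 17.1

# March 2026 daily app downloads and mobile users, Feb 2026 web visits [3]
downloads = 2_200_000 / 340_000           # daily app downloads
mobile = 248_000_000 / 9_400_000          # daily mobile users
web = 5_400_000_000 / 290_000_000         # monthly web visits
print(round(downloads, 1), round(mobile, 1), round(web, 1))  # 6.5 26.4 18.6
```

So the gap ranges from roughly 6x to 26x depending on which metric you pick.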
Sir, this is ~a Wendy's~ a thread about paid use for agentic use cases, especially by individuals for work or by small-to-medium sized companies, not about people asking chat for their horoscope for the next week. Yes, OpenAI still has the horoscope market in tight control; great for them. Do read the room, please.
There's a viral video floating around showing Indian factory workers using head-mounted cameras to capture data for training AI Robots. This article has some details on that. The viral video itself is an unsourced reddit post, unfortunately. [1]
Jesting aside, OpenHub lists Linus Torvalds as having made 46,338 commits. 45,178 for Linux, 1,118 for Git. His most recent commit was 17 days ago. [1]
That is a far cry from a vibe-coder, no? :-)
Bit unfair to call his leadership vibe-coding, methinks.
> Just like stealing fractional amounts of money[3] should not be legal, violating the licenses of the training data by reusing fractional amounts from each should not be legal either.
I think you'll find that this is not settled in the courts, depending on how the data was obtained. If the data was obtained legally, say a purchased book, courts have been finding that using it for training is fair use (Bartz v. Anthropic, Kadrey v. Meta).
Morally the case gets interesting.
Historically, there was no such thing as copyright. The English Statute of Anne (1710), which established copyright in public law, was titled 'An Act for the Encouragement of Learning', and the US Constitution empowers Congress to secure exclusive rights 'to promote the progress of science and useful arts'; so essentially public benefits driven by the grant of private benefits.
The moral bottom line: if you didn't have to eat, would you care who copies your work, as long as you get credited?
The more people who copy your work with attribution, the more famous you'll be. Now that's the currency of the future*. [1]
> The moral bottom line: if you didn't have to eat, would you care who copies your work, as long as you get credited?
Yes.
I have 2 issues with "post-scarcity":
- It often implicitly assumes humanity is one homogeneous group where this state applies to everyone. In reality, if post-scarcity is possible, some people will be lucky enough to have the means to live that lifestyle while others will still be dying of hunger, exposure, and preventable diseases. All else being equal, I'd prefer being in the first group, and my chance for that is being economically relevant.
- It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have. The second group is the largest cause of exploitation and suffering in the world. And the second group will continue existing in a post-scarcity world and will work hard to make scarcity a real thing again.
---
Back to your question:
I made the mistake of publishing most of my public code under GPL or AGPL. I regret it because, even though my work has brought many people some joy and a bit of it was perhaps even useful, it has also been used by people who actively enjoy hurting others, who have caused measurable harm, and who will continue causing harm as long as they're able to, in a small part enabled by my code.
Permissive licenses are socially agnostic - you can use the work and build on top of it no matter who you are and for what purpose.
(A)GPL is weakly pro-social: you can use the work no matter what, but you can only build on top of it if you give back. This produces some small but non-zero social pressure (enforced by violence through governments) which benefits those who prefer cooperation instead of competition.
What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good, not having committed any serious offenses, not taking actions to restrict other people's rights without a valid reason, etc.
There have been attempts in this direction[0] but not very successful.
In a world without LLMs, I'd be writing code using such a license, but more clearly specified, even if I had to write my own. Yes, a lawyer would do a better job; that does not mean anything written by a non-lawyer is completely unenforceable.
With LLMs, I have stopped writing public code at all because, the way I see it, it just makes people much richer than me even richer, at a much faster rate than I can ever achieve myself. It just makes inequality worse. And with inequality, exploitation and oppression tend to soon follow.
> In reality, if post-scarcity is possible, some people will be lucky enough to have the means to live that lifestyle while others will still be dying of hunger, exposure and preventable diseases.
By definition, that's not a post-scarcity world; and that's already today's world.
> It often ignores that some people are OK with having enough while others have a need to have more than others, no matter how much they already have.
Do you think that's genetic, or environmental? Either way, maybe it will have been trained out of the kids.
> it has also been used by people who actively enjoy hurting others, who have caused measurable harm
Taxes work the same way too. "The Good Place" explores these second-order and higher-order effects in a surprisingly nuanced fashion.
Control over the actions of others, you have not. Keep you from your work, let them not.
> What I want is a strongly pro-social license - you can use or build on top of my work only if you fulfill criteria I specify such as being a net social good
These are all things necessary in a society with scarcity. Will they be needed in a post-scarcity society that has presumably solved all disorder that has its roots in scarcity?
> With LLMs, I have stopped writing public code at all because the way I see it, it just makes people much richer than me even richer at a much faster rate than I can ever achieve myself.
Yes, the futility of our actions can be infuriating, disheartening, and debilitating. The story comes to mind of the chap tossing washed-ashore starfish back one by one. There were thousands. When asked why he'd do this futile task (he can't throw them all back), he answered as he threw the next ones: it matters to this one, it matters to this one, ...
Hopefully, your code helped someone. That's a good enough reason to do it.
You probably imagine some Brave New World kind of conditioning. Not to mention, those people will want their kids to have those traits.
> Hopefully, your code helped someone. That's a good enough reason to do it.
No. That's like saying that the V2 rocket program helped keep a bunch of people out of the gas chambers.
We should absolutely do our best to make sure our work does more good than harm, not just that it does some good.
EDIT: I am sad to see your other comment below flagged/dead. HN does not like the idea that a lowly open source contributor could take their phones and computers away from them for petty things like genocide, murder or rape...
[1] https://fred.stlouisfed.org/graph/?g=1duFv
[2] 6% employment decline in 22-25 year old workers https://digitaleconomy.stanford.edu/app/uploads/2025/11/Cana...