Hacker News | svnt's comments

What is the argument for a duopoly when Kimi and Deepseek models are only months behind?

It’s a commodity in the making.


The argument is based on one of these companies hitting the singularity, making it impossible for any other company to catch up ever. I still think it's way more likely we'll see a typical S-curve where innovation starts to plateau. But even a small chance of it happening in the future is worth a lot of money today.

How does it follow that companies that are months apart will trip the singularity and this will prevent the others from doing so?

Who supplies the hardware for the singularity?


There's a massive gap in this singularity thinking. We ARE the singularity. It has been exponential all the way back to the big bang: first the stars, then the solar system, life, consciousness, language, computers, the internet. Yes, it is speeding up, and that is exciting, because we are going to experience a lot in our lifetimes. We have a lot of exponential growth to go before progress becomes instant. There are physical limits, too. Power generation, for example. I can't believe what dumb shit people bet the world economy on.

That's certainly how it looks right now, but where's the guarantee? What happens if it turns out that deep learning on its own can't achieve AGI, but someone figures out a proprietary algorithm that can? That sort of thing. Metaphorically, we're a bunch of tribesmen speculating about the future potential outcomes of the space race (i.e. the impacts, limits, and timeline of ASI).

Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

If you gatekeep, you will not make back the money you invested. If you don't gatekeep, your competitors will use your model to build competing models.

I guess you can sell it to the Department of War.


> What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

It's awesome and world dominating. You just don't sell access to that AI; instead you directly, by yourself, dominate any field where better AI provides a competitive advantage, as soon as you can afford the capital to otherwise operate in that field. You start with the fields where the lowest investment outside of your unmatchable AI provides the highest returns, and plow the growing proceeds into investing in successive fields.

Obviously, it is even more awesome if you are a gigantic company with enormous cash to throw around when you develop the AI in question, since that lets you get the expanding domination operation going much quicker.


To dominate the real world, you need a correcting feedback loop from reality. These feedback loops and regulations (in medical and other industries) take a long time to come back with good signals. So you are still bound by how fast your experiments are.

Yup. That doesn't really take a full-blown AGI on the path to ASI on the path to godhood - it'll take a bit better and more reliable LLM with a decent harness.

That's why I've been saying that the entire software industry is now living on borrowed time. It'll continue at the mercy of SOTA LLM operators, for as long as they prefer to extract rent from everyone for access to "cognition as a service". In the meantime, as the models (and harnesses) get better, the number of fields SOTA model owners could dominate overnight, continues to grow.

(One possible trigger would be the open models. As long as the gap between SOTA and open is constant or decreasing, there will come a point where SOTA operators might be forced to cannibalize the software industry themselves, because a third party with an open model and access to infra could pull the trigger first.)


Don't open models and competition between frontier providers both serve as barriers here? If a frontier provider pivoted as you describe, it would certainly change the landscape, but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else. They'd need a sufficient advantage to pull out far ahead of everyone else before others had a chance to react in a meaningful way. Otherwise the competitors that absorbed all your subscriptions would stack that much more hardware and continue to challenge you.

I think meaningful change to the current equilibrium would require at absolute minimum the proprietary equivalent of the development of the transformer architecture.


> If a frontier provider pivoted as you describe it would certainly change the landscape but they wouldn't be unassailable without developing some sort of secret sauce that gave them an extremely large advantage over everyone else.

Integration, and mindset. AI, by its general-purpose nature, subsumes software products. Most products today try to integrate AI inside, put it in a box, and use it to supercharge the product, whereas it's becoming obvious even to non-technical users that AI is better on the outside, using the product for you. This gives the SOTA AI companies an advantage over everyone else: they're on the outside, and can assimilate products into their AI ecosystem, like the Borg collective adding others' distinctiveness to its own, and reaping outsized and compounding benefits from deep interoperability between the new capability and everything else the AI could already do.

Once one SOTA AI company starts this process, the way I see it, it's the end-game for the industry. The only players that can compete with it are the other SOTA AI companies - but this will just be another race, with nearly-equivalent offerings trading spots in benchmarks/userbase every other month - and that race starts with rapidly cannibalizing the entire software industry, as each provider wants to add new capabilities first, for a momentary advantage.

Once this process starts, I see no way for it to be stopped. Software products will stop being a thing.

Open models can't compete, because they're always lagging proprietary ones. What they do, however, is ensure the above happens: if, for some reason, SOTA AI companies stick to only supplying "digital smarts as a service" for everyone, someone with access to sufficient compute infra is bound to eventually try the end-game strategy with an open model, hoping to get a big payday before SOTA companies respond in kind.

Either way, the way I see it, software industry as we know it is already living on borrowed time.


I don't understand where the unbeatable edge is supposed to come from here. Don't we already have this in the form of agents using tools? Right now it's CLI but it's not difficult to imagine extending that to a GUI coupled with OCR and image recognition in a way that generalizes.

So suppose ACo attempts to subsume Spotify or Photoshop or whatever. So they ... build their own competing platform internally? That's a lot of work. And now they what, attempt to drive users to it by virtue of it being a first party offering? Okay sure that's just your basic anticompetitive abuse of monopoly I guess. MS got in trouble for that but whatever let's assume that happens.

So now lots of ACo users are using a Photoshop competitor behind the scenes. I guess they purchased a subscription addon for that? And I guess ACo has the home team advantage here (anticompetitive and illegal ofc) but other than that why can't Photoshop compete? It just seems like business as usual to me. What am I missing?

If ACo sells widgets and I also sell widgets, assuming I can get attention from consumers and offer a compelling set of features for a competitive price why can't I get customers exactly? ACo's AI will be able to make use of either widget solution just fine assuming ACo doesn't intentionally sabotage me.

I think the more likely issue is that at some point the cost of building software falls far enough that it ceases to be a viable product category. You just ask an agent for a one off solution and it hands it to you.

Projecting out even farther, eventually the agents get good enough that you don't need to ask for software tools in the first place. You request X, the agent realizes that it needs a tool for that, builds the one-off tool, uses it, returns X to you, and the ephemeral purpose-built tool gets disposed of as part of the session history. All of this without the end user ever realizing that a tool to do X was authored to begin with.

So I guess I agree with your end outcome but disagree about the mechanics and consequences of it.

> Open models can't compete

They can though. There's a gap, sure, but this isn't black and white. Plenty of open models are quite useful for a particular task right now.


One of the most valuable software products in the world is Instagram. Tens of billions of revenue annually.

Any of Meta’s competitors could reproduce Instagram “the software” in every meaningful detail for (let’s say) $100M.

They still don’t have Instagram the product. Reducing that outlay to a few billion tokens doesn’t change that.

I guess I’ll believe this theory when Anthropic or OpenAI rolls out a search engine with an integrated ad platform that can meaningfully compete with Google. How hard can that be?


It's not clear to me that one horse-sized AI allows you to outcompete 100 duck-sized AIs in use by everyone else once you factor in the non-intelligence contributions that the others with weaker AIs bring to the table.

There's a lot more to building a successful product than how smart your engineers/agents are, how many engineers/agents you have, and capital.

Google, for example, can be extremely dysfunctional at launching new products despite unimaginably vast resources. They often lack the intangible elements of success, such as empathizing with their customers' needs.

If we were in a world where AI was not already widespread, then I would agree that having strong AI would be an immense competitive advantage. However, in a world where "good enough" AI is increasingly widespread, the competitive advantage of strong AI diminishes as time goes on.


> Imagine such an AI exists. What good is AI that is so good that you cannot sell API access because it would help others to build equivalently powerful AI and compete with you?

At this point, if you can no longer safely drip-feed industry the access to "thinking as a service" and rake in rent, you start using it, displacing existing players in segment after segment until you kill the entire software industry.

That's pre-ASI and entirely distinct from the AI itself becoming so good it takes over.


If you assume the status quo - a powerful not quite human level AI - then you are most likely correct. However one of the primary winner takes all hypotheticals (and to be sure it remains nothing more than a wild hypothetical at this point) is achieving and managing to control proprietary ASI. Approximately, constructing something that vaguely resembles a god.

Since it would be unfathomably smarter than the people making use of it, you could simply instruct it not to reveal information that would enable a potential competitor to construct an equivalent. No need to worry about competition; you could quite literally take over the world at that point.

Not that I think it's likely such a system will so easily come to pass, nor that I think humanity could manage to maintain control over such a system for long. But we're talking about investments to hedge against existential tail risks here so "within the realm of plausibility" is sufficient.


They're months behind now and have very low market share, so as long as they stay months behind the duopoly/triopoly can hold.

This depends on a fantasy cascade of functional consequences of AGI, whatever that acronym even means anymore.

It is just cargo cult financing at this point.


$40B is not anywhere near half of Anthropic at this point. You do get the same access as Nvidia, AWS, and other investors, which has value.

Panic -> response distribution shrinks -> freeze/be angry/make social mistake, but hey it’s fast

You: wouldn’t it be more adaptive if you didn’t do this?

Millions of years of mammalian evolution, unevenly distributed in homo sapiens: No


You can blame millions of years of evolution for your bad life, or you can change it right now, living in the present moment. It’s fine if you don’t do it right now, because later, at a future present moment, you can still make the choice to be happy. It might take some work, but it will never be because of something that happened in the past. It will be something that you do right now. There are no exceptions or escape hatches.

These cliches are just annoying to read at this point, everyone has heard this stuff a million times and yet...millions still suffer. If I'm being honest it just comes across as yet another form of bullying when socially well adjusted people say stuff like this to people worse off than them.

What they are saying is true. It just might take a lot of work to get the ball moving.

I can agree with you while still agreeing with parent poster that it's basically "git gud"-tier bullying.

Very, very few orators can successfully pull off "just fix your problems bro" as anything beyond a generic kick in the pants for the people presently predisposed to be motivated by one.


I regularly bully my close friends into being better people. It just so happens that I fell down the staircase of life much earlier than a lot of people do. I had to do most of my “midlife crisis” thinking in my early 20s because most of my family died and I had to come out as gay without any support.

Now that I’m in my 30s I have the joy of helping my friends along on this journey called life. Sometimes people just need a gentle nudge up the staircase. Sometimes they need to be carried against their will


I'm trying to figure out how to manage a similar situation.

It's like your friends wanna party raid but they keep going in with incomplete builds

I only got so much patience before I find a new guild


I agree it can feel frustrating and unactionable, but it's not bullying; it's a thoughtful, well-meaning response. Actually, if it makes you feel bad, that's a signal it may be worth contemplating more.

It may be well-meaning, but it is clearly not thoughtful.

They aren't saying this, though; they are saying to go to therapy to help solve it.

That approach doesn't work for everyone. Everything you say could be correct, but if the person thinks their feelings are not being listened to, there is a chance they still won't take your advice.

One of my therapists said it was normal in her circle for people not to get onto someone's case if they're mentally unwell and have chores piling up, because it makes sense they don't have as much effort to give to all aspects of life. At the time I didn't understand this statement, because up until then my only contacts were people who, although they didn't go as far as "bullying" me into compliance, had told me in effect that how I felt about my life was irrelevant to whether or not I was fulfilling every single one of my adult responsibilities. What ultimately worked for me wasn't those contacts who said there were no excuses, but my therapist who decided not to frame my decisions in terms of "excuses".

For me this kind of thing hurts because:

1. There's not any room for compassion or slack. I'm not talking about people who take advantage of others' goodwill. Even if you try to help with this "no excuses" mentality, the other person could start to worry if the next inadvertent slip-up or setback counts as an "excuse" they'll be looked down upon for. This kind of thought will linger and reduce the effectiveness of the intervention.

2. Your feelings aren't listened to, or if they are, it's only at a level superficial enough to obtain compliance. This is bad enough on its own. What might not be obvious is that if the person has had a life marked by repeated instances of their feelings being shut down or not listened to, especially in childhood, this approach only backfires that much harder. These are emotional patterns established in critical periods/over a long period of time that are being relived at a much higher intensity than in the average population. And most importantly, you can't know for sure if something like this applies until you get to know the person better, which is why a lot of one-off prescriptive advice toward strangers is ineffective.

3. The advice-giver is often successful/came out of hardship themselves, so by being looked down upon as irresponsible it gives the impression that you're being excluded from the in-group of mentally well/recovered people. Avoiding exclusion from a group is one of the biggest sources of strife today, as modern politics and social media indicate. And being mentally stable is often one of the most important groups to be included in for people who know they're depressed, so it hurts even more.


That’s all excuses. I’m not saying it’s right to bully someone who’s in the depths of depression. But the depression isn’t gonna fix itself and it certainly won’t fix itself because of something that happened in the past

i don't know what it takes to get out of depression, but "it isn't going to fix itself" doesn't contradict that the depressed person can't get out of it on their own. it's like telling someone stuck in a hole to stop whining because they are not going to get out of the hole as long as they do nothing. that's true, but they are also not in a position to see a way out, or may simply not be able to get out without help.

as i said, i don't know what it takes, but i do think that compassion, patience, and recognition of efforts and absence of any hint of blame by others are part of it.


I don't know what to say then, except I'm going to keep working with clinicians who say otherwise.

All of this is integral to me working with my current therapist, so I don't see what it has to do with depression not fixing itself.


Yeah I don’t disagree, but your approach comes off as uncaring and arrogant.

It’s not my job to care about you. It’s your job to care about yourself.

I see, so you are saying your commenting on the person who was struggling was only about your superiority all along?

Your response assumes a lot about the homogeneity of subjective human experience that the data don’t seem to support.

There is a diversity of physical attractiveness, innate and learned social grace, social environment, and phenotypic variability in psychosocial capacity that makes your comment sound extremely out of touch to some people.

I can do what you describe because I am fortunate that many of my social interactions are positive. For the people I work with this is not the case: they are extremely socially isolated, and the tragedy is that every mistake they make compounds this. They are more sensitive interpersonally than I am and more socially aware in the moment, while less equipped to deal with social conventions and unattractive, becoming dramatically more so in social situations due to their intrinsic reactions.

The points in the article can help all of us.


> and the tragedy is that every mistake they make compounds this

This is correct and I'm convinced there comes a point where there's no way out. The vast majority of social experiences in my life have been negative and it gets worse every time I have another, making it less likely the next will be positive.

Rather than continue to get hurt I have nearly 100% socially isolated myself, save for the internet. I work remote in a rural area and I only leave the house for essentials. There's no place for me socially and I've accepted that.


> This is correct and I'm convinced there comes a point where there's no way out.

My friend, things can always improve. Having mental health problems is hard, because you're ultimately using your own 'impaired' brain to analyze your own situation. Talking to a therapist is effective in breaking this, because it forces you to organize your thoughts into something coherent to explain it to your therapist. Only at this point will flaws in this reasoning become apparent.

If you cannot talk to a therapist (or otherwise a neutral person who doesn't judge you for what you say), you can try writing it down. Try to write down why you feel what you feel, what you feel when you talk to another person, what you think that others think and feel about you, how those feelings developed, how other people have influenced your feelings, everything. Read it as if someone else wrote it. What would you do in their situation? Do you agree with what you wrote down? If you come across holes in what you've written, try to revise that part, rewriting it to incorporate the criticisms.

> making it less likely the next will be positive.

Why do you think that's the case? If you throw a die and it comes up 1 three times in a row, that doesn't make it more likely that the next throw will be a 1 again. There are so many different people that it's as good as random what kind of interaction you will have.


Some people are just happiest being alone. I usually feel I am in this camp. I can have reasonably good social interactions but it's often awkward and even when it's not it is a lot of work and it doesn't seem commensurate with the reward, which is very little.

I like staying at home, reading, tinkering, doing my hobbies. I do not crave the company of others, and walking into a room and having to be "on" even with people I know and am friendly with is so draining.


I’m also in this camp. There’s nothing better than to be lost in your own flow. However, I find these moments to be richer when someone is silently tinkering beside you, in a sort of passive interaction. Typical people tend to behave awkwardly when there’s no point or reason in talking while in the company of others. This has to be as much a deficit as the normative definition of social awkwardness. I could never connect with these kinds of people, who are always ruining silence for no reason other than trying to escape their own discomfort.

The point is that a fully grown person (i.e. adult) should be able to regulate their emotions to the point of being able to have a conversation with 3 strangers.

You might not like it, it might stress you out a bunch, you can cry afterwards, or have a stiff drink after, but you should be able to set those emotions aside for 30 minutes, especially for something important like a job interview.

If someone cannot do that, they should definitely go into therapy for that. No matter if it was 'done to them', it's still a problem that person carries around, and the only way around that is fixing it.


lol, go be yourself on your own time. On my time, you better be normal and happy about it.

None of the many many reasons someone may act this way mean they are broken, and therapy is not about 'fixing' someone to be the member of society you deem appropriate.


Therapy is (or at least can be!) about trying to achieve goals that you have. I’m the GP commenter above. I went to therapy twice a week for two years to get over social anxiety and my entire life has completely opened up in a new way that would never have been possible without that work.

If relating to people is not a goal of yours then I would agree that you should not go to therapy for it. On the other hand, it is difficult for me to believe that anyone with anxiety is truly comfortable, considering that discomfort is the main feature of anxiety.


It is far more helpful to others for you to share the depths of your experience than to go around telling people they need to go to therapy because it works for you.

I see the enthusiasm and that you want better things for others, but the way you are approaching this communication is not doing it justice.


Nobody knows who anybody is in these comments. It's impossible to tailor our comments to people who might read it.

Awesome of you to put words in my mouth. I don't think people are 'broken' for having mental issues, and even I certainly would never imply that someone is somehow 'less' because of mental issues.

Just as someone with a broken leg is not a 'broken' person, their leg still needs fixing.

Just FYI: two people could have the same mental health issues, but one could get a diagnosis and the other doesn't. The reason is that a 'diagnosis' is basically just a ticket to get treatment, and thus is based solely on the question: "Will this person be able to deal with the disruptions caused by the issue without professional intervention?"

If someone has a panic attack every time they talk to 3 strangers, it is very plausible that this can lead to difficulty making and maintaining friendships and relationships, which can likely lead to loneliness, depression, even further exacerbated social anxiety, etc. All these afflictions make it even harder to deal with these issues, which is why some people cannot break this cycle by themselves.


Some of the people I work with have gone into therapy. The more intelligent and in touch with their emotions they already were, the less therapy did for them. For a lot of dudes it is a revelation. For a lot of others it is not, just a way to continue to surface intractable problems in conversation.

Therapy doesn’t always help; many people need more compassion from those around them. And society would be better equipped to provide that if, instead of referring their contacts to specialists they might not be able to afford, more well-off people developed some minor therapeutic ability and concern for their fellow humans.


What is this obsession with therapy? There is no solid evidence it works yet it is relentlessly recommended.

Yes there is

No there isn’t.

There's plenty of research that shows treatment along with talk therapy yields better outcomes than treatment or therapy alone. If you have any evidence or anecdotes that are to the contrary I would love to hear it.

The burden of proof isn’t with me. I am not the one saying therapy works.

> No matter if it was 'done to them',

Love the quote marks. Next time try a Marx quote. I mean the brothers.

To fellow humans reading: the point is that the ones who did this to you are extremely unlikely to repent. Or even to comprehend that what they did to you is wrong.

Even if you were to explicitly hurt yourself - or place yourself in a position where you get hurt very badly - with the intent to communicate "do you still not see what you did to me?"... it's just no sweat off their back, you know? "Yeah that person was all wrong, had it coming anyway".

The social contract protects them better than it protects you, so an "eye for an eye" solution is also unlikely to work - or even be possible: we don't hit, do we?

Therapy is... some person's job. That they trained for, you know? To put some food on the table, you know?

That means you can "go to therapy" in good faith (assuming you can access it in the first place) and not heal at all. The therapist might be a talented and intrinsically motivated person - or might just go "mmhmm" as you try to get through to them that they are doing exactly nothing to help you heal from some very particular, and perhaps not even unclearly defined at all, mental wound (that PP has had the gall to put in 'scare quotes'.)

Point is, the therapist will get paid either way. There is no shortage of people being told to get therapy by their fellows (who are too fucked up themselves to exhibit basic human fellowship). The systemic incentive to heal people's minds is next to nonexistent in comparison with the systemic incentive to drive hurt people mad, and then destroy them for being mad.

My suggestion: read some fucking books, and I don't mean books about fucking, I mean fucking books. Then, you might begin to get a clue how to get in touch with your spite, and how to become the undoing of all that ever wronged you without turning into that thing in the process.

TL;DR: You can start with those people who taught you that "feeling sorry for yourself" is a thing, and that it's what you need to do to make those who wronged you to regret their actions. You take those people and unlearn everything that they ever taught you. If there was anything true at all in what they wanted you to understand, you'll relearn it on your own, unencumbered by association with their other insidious lies. Then you can go tell two priestly kings that the balamatom sez hi ;-)


Sadly the human need for being heard and understood is innate, and it has been my experience that books can't substitute for that need. On the other hand, there are swathes of incompetent therapists that can only aggravate one's mental state.

The only solution I see is to find the right therapist. Some people might not manage that even when their future depends on finding one, and they give up too early. I can't see how that would be fixed, except maybe by having a mediator who pairs you up with therapists they recommend and asks whether you feel an improvement each week. You'd be surprised, but I had nobody to do this for me. So I ended up losing years' worth of time sticking with incompetent therapists, because "going to therapy" like everybody told me to seemed more important than "fixing my life."

As cruel as it sounds, I was in no position to think critically about my own treatment because my mental state only allowed me to see checking off the box of self-care to get people off my back as the ultimate goal. It's the nature of the problem of mental healthcare. If I had been given a simple questionnaire to rate my treatment providers on a scale of 1-10 in various dimensions, I would have been put in front of someone else within a month or two.


You know who's infinitely patient, has read every psychology text book and is available immediately at 2am and not in a week that you have to schedule an appointment for? ChatGPT. (or Claude or any of them.)

Despite popular opinion, having a sycophantic therapist trained above all else to be liked by you is actually not good.

I was gonna bet on "the police" but "having read books" kinda disqualified that

A therapist does more than just listen. A therapist is more like a driving instructor sitting in the second seat that points out things that you should pay attention to, and can take the wheel if you head into dangerous territory.

If you say something like "I hate that people don't see the real me", LLMs would say "yes, it's understandable that that would make you upset", basically confirming your reasoning as valid, while a therapist would ask "why do you want people to see the real you?" or "What is, in your words, the difference between how people see you now and how they would see you if they saw the 'real you'?" These kinds of questions force you to explain and identify your assumptions and reasoning.

LLMs are more like friends, providing a listening ear but otherwise just nodding along.

edit: To be clear, this is why LLMs are NOT a good replacement for therapy. Using LLMs will likely only exacerbate problems instead of mitigating them.


>Sadly the human need for being heard and understood is innate

And humans are hell-bent on denying this to each other. Just like sustenance or shelter. Hmm. Wonder what's that all about?

>You'd be surprised

The hypothetical everyman that is addressee in this turn of phrase? Yeah, probably would. Me though? I wouldn't even feign it.

>but I had nobody to do this for me.

Root of the problem right there. Not your fault. (At least if we reason causally, and not scapegoatingly.)

>So I ended up losing years worth of time sticking with incompetent therapists because "going to therapy" like everybody told me to seemed more important than "fixing my life."

Exactly.

Sending someone to therapy is a socially acceptable accountability sink. And a "good vibes"-coded method of gaslighting.

The sender-to-therapy still wants to maintain your acquaintance. They might not even be getting something out of it, or even expecting to gain something; they just want to do the normal thing like they're taught to; which amounts to "do not be seen looking like you're snubbing somebody because dats rood".

And, simultaneously, they don't actually want the cognitive load of acknowledging you as a real person in a real pickle, so they can't "be there for you" (another treacherous wording). After all, reality is a contagious thing; what's next - they become aware of their own shit? Unthinkable - what if that makes them incapable of traumatizing their kids one day? Better just do the normal thing and let you rot. It's all upside!

It's narcissism all the way down, through the bottom, and up by the bootstraps.

(See also cousin post:

>LLMs are more like friends, providing a listening ear, but otherwise just nodding along.

If that's the standard of friendship, it's more useful to make enemies!)

> If I had been given a simple questionnaire to rate my treatment providers on a scale of 1-10 in various dimensions, I would have been put in front of someone else within a month or two.

And then those poor psych grads would've been denied their lucrative and inconsequential careers! The horror, the enormity!

>It's the nature of the problem of mental healthcare

Mental healthcare is impossible without actual concepts of "mind", "health", and "care". The society we inhabit only has some poor statistical approximations of those, Seeing like a State-style. Best "we" can do, therapy-wise, is figure out how to make you scream less loudly.

>As cruel as it sounds, I was in no position to think critically about my own treatment

It does not sound cruel. You are not hurting anybody. You are being critical of your past self. This is, generally speaking, a correct thing to do.

>because my mental state only allowed me to see checking off the box of self-care to get people off my back as the ultimate goal.

Your mental state does not exist in a vacuum; it is primarily a product of your environment. If they teach you box ticking, you're gonna do box ticking. If they misteach you that box ticking appeases, you're gonna keep ticking boxes until it appeases - except it won't and while you're busy waiting for it to appease them, they will do whatever the fuck they want with you. It's their way of life. Who are we to deny them that? How?

Of course, if you've found a therapist that works for you, all of this is probably moot; as to other readers, my suggestion continues to be as follows:

- Begin with rejecting any premise they're trying to force/shame/blackmail you into accepting, no matter how socially acceptable this premise might seem on the surface.

- Then, proceed to deconstruct the premise and its implications from a maximally cynical perspective. This will simplify things to a level where one is able to reason about them even with most higher faculties disabled.

- Once you've used this to regain higher ground (a process which, in itself, is already a source of valuable first-hand experiences), you can commence the actual "debugging" of your higher faculties (and, through that, figure out those things only you can figure out).


Honestly, friend, I think you have some very cynical views about relationships that aren't very healthy.

All this talk about retribution: what do you think you will get out of it? What do you think would happen if all your bullies called you and told you they're sorry for what they did? Would that erase any of the memories you have? Would that suddenly make you approach each social interaction without worry and with confidence? Would that help improve how you feel about yourself?

I think the answer is a resounding 'no'... The required changes between you and your 'best self' are not within them, but they are within you.

> that it's what you need to do to make those who wronged you to regret their actions

Why does your goal even include these people? Why do you keep letting them play a part in your life? Your goal should be to live however you want to live, and to disregard these people.


Because, "friend", I do not only care about myself.

> There is a diversity of physical attractiveness, innate and learned social grace, social environment, and phenotypic variability in psychosocial capacity

I say this with respect: the kind of attitude you're describing does more to isolate people than anything mentioned in the original post.

Bitterness or even just muted disappointment will drive people away more than any of the factors you mentioned, by a factor of 10. Have any of you gone on a date with someone who looked great on paper, but seemed unhappy to be there or resentful towards you? That's the ultimate connection killer.

You can have all sorts of setbacks, but if you're chill and have a good attitude people will want you around (barring a few assholes, but it's important not to worry about them). OTOH even if you're very good looking, no one will want to approach you if your vibes are bad or inward facing.


Respect for developmental diversity does more to isolate people?

Because it seems like you and several other people are projecting a lot of “trauma is my identity” ideas on me that aren’t in what I wrote.

What I wrote is that telling people “get good, I did” is really unhelpful. Put more work and thought into how you try to connect with people whose experience is very different from yours.


Why do you assume my experience is so different? There are tons of people on forums like these who've dealt with extreme shyness and severe problems, yet managed to persevere. Your struggles might not be nearly as unique as you think.

I am assuming this because you are projecting all over me and not distinguishing between me and the people I was making the point about. I was pretty clear in my comment that I do not struggle with shyness. Some people experience debilitating levels of shyness, and some people have done the work necessary to understand the perspective of those people, but in my experience they do not communicate like you do.

I have no idea what you're trying to say.

>> Why do you assume my experience is so different? There are tons of people on forums like these who've dealt with extreme shyness and severe problems, yet managed to persevere.

> I am assuming this because you are projecting all over me

Projecting means you are making assumptions rooted in your own experience about what I think and how I feel which are not accurate.

>> Your struggles might not be nearly as unique as you think.

> and not distinguishing between me and the people I was making the point about. I was pretty clear in my comment that I do not struggle with shyness.

This means you are conflating me with the people I work with who struggle with this — ie you did not take the time to understand my comment chain before replying.

> Some people experience debilitating levels of shyness, and some people have done the work necessary to understand the perspective of those people, but in my experience they do not communicate like you do.

This means you appear to be functionally illiterate in the language of subjective experience and are just insisting that other people experience the world the way you do. This is understandable as a default because for many it is a familiar, easier, model of reality to work with. People exist who have the same fluency in this area that you have in your primary area of expertise. Think about the gap in understanding between someone who knows nothing about your area of expertise and you. Think about how they sound trying to explain to you how to solve a problem in your work.


No, I know what all that means. I don't know what your perspective is, beyond your defenses.

Good luck all-in.

But seriously what are you doing that this works? I guess if you are writing pop culture articles this might work.

For anything where the output has consequences I can’t imagine finding success like this.


The latent space knowledge that the models have is stronger than the inference agent going out and trying to find information to integrate into context.

If you ask why the sky is blue, the model already has the answer. It's corrosive to your conversation to pull a bunch of unknown sources into context so the model can appease your "feels right" request.

If you don't trust the answer, your brain is still way way better at quickly scanning sources to verify the answer.

But the fact of the matter is that these models went from stumbling over "9 + 7 =" three years ago to solving Erdős problems today. And benchmarks (now so saturated we don't even bother with them anymore) reveal that the models basically all have encyclopedic knowledge of every major career field. Which also makes sense, because the labs have been purposely drilling hard on building pristine datasets of all this knowledge.

I would challenge you to find one firmly established general academic question that a SOTA model gets wrong. Good luck.


I use Claude and Gemini all the time, and they get more advanced theory, motivation, and history wrong all the time.

If you aren't seeing the errors, it is because you are having really mainstream conversations, or because you don't know enough to spot what they are saying that is wrong.

This is trivial to demonstrate to yourself for any nontrivial project. A single academic question is easy to get the right answer for. That is not the dominant AI use case for most product people or engineers.


This seems almost completely untrue?

The new models do have smaller turbocharged engines, that part is true. But they get >30% better fuel economy, and they output more power.

The reliability might become an issue down the road, especially in hybrids, but the data so far don't seem to support your assertions. The one exception is maybe the Tundra 3.4L, but even that still seems ambiguous as to root cause, and may just be a manufacturing process error.


I wonder if this notion comes from the 80s, when engines with turbos had lower compression ratios for reliability. Today's turbocharged motors have higher compression ratios than in the malaise era, and the turbos have a lot less lag. Turbos no longer mean you have to sacrifice fuel economy for performance (unless you have a lead foot).

>Turbos no longer mean you have to sacrifice fuel economy for performance (unless you have a lead foot).

That's incorrect. Virtually every turbocharged gas car runs slightly richer than stoichiometric, using the unburnt fuel to manage temperature/knock. With diesels you actually get more efficiency out of a turbo for free; with gas you're practically guaranteed to be throwing fuel out the pipe.


That isn't some turbo specialty, the effect is the same in both NA and turbo engines. And AFAIK it isn't really feasible anymore. I don't know about other manufacturers, but for example Volkswagen Group's EA211 EVO2 engines run pinned at lambda 1 no matter what.

All I know is my last turbo'd vehicle, a 2013 Nissan with a turbocharged L4, always ran at 13.8, and that annoyed the piss out of me. It pretty much guaranteed only 26 MPG at highway speeds. This was despite claims in the manual that the AFR was fuel-octane dependent and would automatically vary (which I found out through experimentation was full of shit). It just stayed pinned at 13.8 whether you ran 87 or 91.
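For context (a rough sketch; 14.7:1 is the commonly cited stoichiometric air-fuel ratio for gasoline, and the exact figure varies with the fuel blend), an AFR of 13.8 corresponds to a lambda noticeably below 1.0, i.e. rich:

```python
# Rough illustration: lambda = measured AFR / stoichiometric AFR.
# 14.7:1 is the textbook stoichiometric ratio for gasoline (an
# approximation; real blends vary). Lambda < 1.0 means running rich.
STOICH_AFR = 14.7

def lambda_value(afr: float) -> float:
    """Convert an air-fuel ratio to the dimensionless lambda value."""
    return afr / STOICH_AFR

print(round(lambda_value(13.8), 3))  # ~0.939, i.e. running rich
```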

Nope, it's just engineering built to last not much beyond the warranty. Turbo engines aren't inherently unreliable (though you might need to replace the turbo itself every 100-200k, so they're still more expensive to maintain); you just need to build an extra-strong block and components if you want one to run for a long time.

And why would a company do that, if it would put the engine's lifetime far past the warranty period?


This is what Toyota marketing says.

Toyota marketing says that they're selling you a worse engine?

His point is that on x86 there is no performance difference, but everyone except his colleague/friend uses xor, while sub actually leaves cleaner flags behind. So he suspects it's some kind of social convention, selected at random and then propagated via spurious arguments in its support (or because it "looks cooler", as a bit of a term of art).

It could also be a result of most people working in assembly being aware of the properties of logic gates, so they carry the intuition that under the hood xor might somehow be better.
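For readers outside assembly land, both zeroing idioms lean on the same algebraic identities, which a quick sketch (in Python, purely illustrative; on x86 these correspond to single instructions like `xor eax, eax` and `sub eax, eax`) makes concrete:

```python
# Illustrative only: the zeroing idioms work because x XOR x == 0 and
# x - x == 0 for any value, so the register's prior contents don't matter.
for x in [0, 1, 0xDEADBEEF, 2**32 - 1]:
    assert x ^ x == 0                 # what `xor eax, eax` computes
    assert (x - x) & 0xFFFFFFFF == 0  # what `sub eax, eax` computes (32-bit)

print("both idioms zero the register for any input")
```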


GP seems to think it strange that "x86" would actually not have a performance difference here.

I think this might just be due to not realizing just how far back in CPU history this goes.


In a clockless CPU design you'd indeed expect xor to be faster. But in a regular clocked CPU you either waste a bit of xor's performance by making xor and sub both take the same number of ticks, or you speed up the clock enough that the difference justifies sub being at least a full tick slower.

The former just seems way more practical.


Even if they take the same number of ticks, shouldn't xor fundamentally needing less work also mean it can be performed while drawing less power/heating less, which is just as much an improvement in the long run?

That wasn’t much of a concern in the 70s and 80s.

Also, you probably spend much more energy moving the bits around the chip and out to RAM than you do on the actual calculation.

I think an even more likely explanation is that x86 assembly programmers often were other-architecture assembly programmers first, or learned from them. Maybe there's an architecture where it makes more sense and the habit can be so attributed; 6502 and 68k are the first places I would look.

For 68k, depending on the size you're interested in, it mostly doesn't matter.

For .b and .w, clr, eor, and sub are all identical.

For .l, moveq #0 is the winner.


6502 doesn't even have register-to-register ALU operations; there's no alternative to LDA #0.

8080/Z80 is probably where XOR A got a lead over SUB A, but they are also the same number of cycles.


They are different observations, I think, though the phrasing confuses it:

a) Cost per successful task is rising (e.g. the Claude Max allocation is functionally shrinking).

b) Is there enough potential cost reduction in the pipeline to make up the gap?

c) If open models converge on a more efficient but slightly less capable point (which has effectively happened), what is the actual moat?


Yes, cost per successful task is rising, i.e., we are all effectively paying more for AI.

And yet Anthropic is still struggling to have enough capacity to serve demand; they are virtually sold out.

And yes, there are almost-as-good open models, on par with the closed models from 6 months ago (at worst), that are just a single OpenRouter API call away, and yet Anthropic is still selling out. So people are paying for the premium product anyway, for whatever reason: maybe the last bit of intelligence is worth it, maybe they like the harnesses/products around the models, maybe it's a brand/enterprise-sales thing.

Put aside your feelings about the AI industry and imagine we are talking about thingamajigs. Prices for thingamajigs are going up. They are still selling out about as fast (or faster) than the company selling them can build factories. There are more cost-effective competitors already in the market, but thingamajigs are selling out anyway.

Would you, looking at the thingamajig industry, conclude that "the jig is almost up"? That "the returns aren't anywhere close to what investors expect" and that the impending IPO is some desperate hail mary to save things before the collapse?


I don’t have feelings about the AI industry to put aside. I would not have sufficient information to assess whether thingamajigs are legitimately valuable or whether they are tulips. The only indicator I see is the last point about people using it in the short term despite having access to cost effective alternatives, which actually points to irrationality/FOMO more than legitimate value.

What we are looking at looks to me like it is rapidly becoming a commodity: it will become as existential as electricity and water to businesses, and it will be sold, marketed, and regulated more or less like a utility.


Nice em-dash there bro

Thanks I am the source. em-dashing since 1997

It is meaningless when what you sell costs more than what your customer pays for it.

I could sell $100B of GPUs tomorrow at 90% of their cost and I'd have market acceptance.

