
I also have the same question. That said, for some problems, at least over the last week or so, I did sometimes get better results from lower-effort Opus or even Sonnet. Sometimes I get (admittedly this is by feel) a better experience from voice mode, which uses Haiku. This is somewhat surprising in some ways but maybe not in others. Some possible explanations include: (a) bugs relating to Anthropic's recent post-mortem [1] or (b) a tendency for a more loquacious Claude to wander off into the weeds rather than offering a concise answer that invites short back-and-forth conversation and iteration.

[1]: https://www.anthropic.com/engineering/april-23-postmortem ... but also see the September 2025 one at https://www.anthropic.com/engineering/a-postmortem-of-three-...


I very much value and appreciate the first four paragraphs! [3] This is my favorite kind of communication in a social setting like this: it reads more like anthropology and less like judgment or overgeneralization.

The last two paragraphs, however, show what happens when people start trying to use inductive reasoning -- and that part is really hard: ...

> Therefore I need more time and effort with Gen AI than I needed before because I need to read a lot of code, understand it and ensure it adheres to what mental model I have.

I don't disagree that the above is reasonable to say. But it isn't all -- not even enough -- of what needs to be said. The rate of change is high, and the amount of adaptation required is large. This, in a nutshell, is why asking humans to adapt to AI is going to feel harder and harder. I'm not criticizing people for feeling this. But I am criticizing the one-sided logic people often reach for.

We have a range of options in front of us:

    A. sharing our experience with others
    B. adapting
    C. voting with your feet (cancelling a subscription)
    D. building alternatives to compete
    E. organizing at various levels to push back
    
(A) might start by sounding like venting. Done well, it progresses into clearer understanding and hopefully even community building towards action plans. [1]

> Hence Gen AI at this price point which Anthropic offers is a net negative for me because I am not vibe coding, I'm building real software that real humans depend upon and my users deserve better attention and focus from me hence I'll be cancelling my subscription shortly.

The above quote is only valid under some pretty strict (implausible) assumptions: (1) "GenAI" is a valid generalization for what is happening here; (2) the person cannot learn and adapt; (3) the technology won't get better.

[1]: I'm at heart more of a "let's improve the world" kind of person than "I want to build cool stuff" kind of person. This probably causes some disconnect in some interactions here. I think some people primarily have other motives.

Some people cancel their subscriptions and kind of assume "the market and public pushback will solve this". The market's reaction might be too slow or too slight to actually help much. Some people put blind faith in markets helping people on some particular time scale. This level of blind faith reminds me of the Parable of the Drowning Man. [2] In particular, markets often send pretty good signals that mean, more or less, "you need to save yourself; I'm just doing my thing." Markets are useful coordinating mechanisms in the aggregate when functioning well. One of the best ways to use them is to say: "I don't have enough of a cushion or enough skills to survive what the market is coordinating, so I need a Plan B!"

Some people go further and claim markets are moral by virtue of their principles; this becomes moral philosophy, and I think that kind of moral philosophy is usually moral confusion. Broadly speaking, in practice, morality is a complex human aspiration. We probably should not abdicate our moral responsibilities and delegate them to markets, any more than we would say "Don't worry, people who need significant vision correction (or who face other barriers to modern life)... evolution will 'take care' of you."

One subscription cancellation is a start (if you actually have a better alternative, and if that alternative is better for the world ... which is debatable given the current set of alternatives!)

Talking about it, e.g. here on HN, might be one place to start. But HN is also kind of a "where frustration turns into entertainment, not action" kind of place, unfortunately. Voting is cheap. Karma sometimes feels more like a measure of conformity than of quality thinking. I often feel like I am doing better when I write thoughtfully and still get downvotes -- maybe it means I got some people out of their comfort zone.

Here's what I try to do (but fail often): Do the root cause analysis, vent if you need to, and then think about what is needed to really fix it.

[2]: https://en.wikipedia.org/wiki/Parable_of_the_drowning_man

[3]: The first four are:

    I write detailed specs. Multifile with example code. In markdown.

    Then hand over to Claude Sonnet.

    With hard requirements listed, I found out that the generated code missed requirements, had duplicate code or even unnecessary code wrangling data (mapping objects into new objects of narrower types when they won't be needed) along with tests that fake and work around to pass.

    So turns out that I'm not writing code but I'm reading lots of code.

People come at this with all kinds of life experience. The above notion of trust is, to me, quaint and simplistic. I suggest another way to frame trust, as a more open-ended question:

    To what degree do I predict another person/org will give me what I need and why?

This shifts "trust" away from all-or-nothing, and it gets me thinking about things like "what are the moving parts?", "what are the incentives?", and "what is my plan B?".

In my life experience, looking back, when I've found myself swinging from "high trust" to "low trust", the change was usually rooted in my expectations -- in a naive understanding of the world that was rudely shattered.

Will you force trust to be a single bit? Or can you admit a probability distribution? Bits (true/false, yes/no, trust/don't trust) thrash wildly. Bayesians update incrementally, which is (a) more pleasant; (b) more correct; (c) more curious; and (d) easier when comparing notes with others.
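
To make the contrast concrete, here is a minimal sketch in Python. To be clear, every number below is hypothetical, purely for illustration -- the likelihoods and the prior are made up, not calibrated to anything:

    # A sketch of incremental (Bayesian) trust updating -- illustrative numbers only.
    P_GOOD_IF_TRUSTWORTHY = 0.8  # hypothetical: P(good interaction | trustworthy)
    P_GOOD_IF_NOT = 0.6          # hypothetical: P(good interaction | not trustworthy)

    def bayes_update(p: float, good: bool) -> float:
        """Return the updated P(trustworthy) after one good or bad interaction."""
        like_t = P_GOOD_IF_TRUSTWORTHY if good else 1 - P_GOOD_IF_TRUSTWORTHY
        like_n = P_GOOD_IF_NOT if good else 1 - P_GOOD_IF_NOT
        return like_t * p / (like_t * p + like_n * (1 - p))

    p = 0.8  # prior after a long, mostly good history
    for good in (False, False, True):  # two bad experiences, then a good one
        p = bayes_update(p, good)
        print(round(p, 2))  # drifts gently: 0.67, 0.5, 0.57

A bit-based truster, fed the same three observations, flips from "trust" to "don't trust" and maybe back again. The probability, by contrast, moves a little each time and carries its history with it.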


I know some people use the word "gaslighting" in connection with Anthropic. I've read some of those threads here, and some on Reddit, but I don't put much stock in them. To step back, hopefully reasonable people can start here:

    1. Degraded service sucks.
    2. Anthropic saying, in effect, "we're not seeing it" sucks.
    3. Not getting a fix when you want it sucks.

Try to understand what I mean when I say that none of the above meets the following sense of gaslighting: "Gaslighting is the manipulation of someone into questioning their perception of reality." Emphasis on understand what I mean. This says it well: [1].

If you can point me to an official communication from Anthropic where they say "User <so and so> is not actually seeing degraded performance" when Anthropic knows otherwise, that would clearly be gaslighting -- intent matters, in my book.

But if their instrumentation was bad and they were genuinely reporting what they could see, that doesn't cross into gaslighting, in my book. But I have a tendency to think carefully about ethical definitions. Some people just grab a word with a negative valence off the shelf and run with it; I don't put much stock in what those people say. Words are cheap. Good ethical reasoning is hard and valuable.

It's fine if you have a different definition of "gaslighting". Just remember that some of us have actually been gaslit by people, so we prefer to save the word for situations where the original definition applies. People like us are not opposed to being disappointed, upset, or angry at Anthropic, but we have certain epistemic standards that we don't toss out when an important tool fails to meet our expectations and the company behind it doesn't recognize it soon enough.

[1]: https://www.reddit.com/r/TwoXChromosomes/comments/tep32v/can...


To my eye, gaslighting is a serious accusation. Wikipedia's first line matches how I think of it: "Gaslighting is the manipulation of someone into questioning their perception of reality."

Did I miss something? I'm only looking at primary sources to start. Not Reddit. Not The Register. Official company communications.

Did Anthropic tell users, e.g., "you are wrong; your experience is not worse"? If so, that would reach the bar of gaslighting, as I understand it (and I'm not alone). If you have a different understanding, please share what it is so I understand what you mean.


I'd rather not speak too poorly of Anthropic, because - to the extent I can bring myself to like a tech company - I like Anthropic.

That said, the copy uses "we never intentionally degrade our models" to mean something like "we never degrade one facet of our models unless it improves some other facet of our models". This is a cop-out, because it is exactly what users suspected and complained about. What users want - regardless of whether it is realistic to expect - is for Anthropic to buy even more compute than Anthropic already does, so that the models remain equally smart even as service demand increases.


It seems to me you dropped the "gaslighting" claim without owning it. I personally find this frustrating. I prefer when people own up to their mistakes. Like many people, to me "gaslighting" is just not a term you throw around lightly. Then you shifted to "cop-out". (This feels like a motte-and-bailey.) But I don't think "cop-out" is a phrase that works either...

Some terms: the model is the thing that runs inference. Claude Code is not a model; it is a harness. To summarize Anthropic's recent retrospective, their technical mistakes were about the harness.

I'm not here to 'defend' Anthropic's mistakes. They messed up technically. And their communication could have been better. But they didn't gaslight. And on balance, I don't see net evidence that they've "copped out" (by which I mean mischaracterized what happened). I see more evidence of the opposite. I could be wrong about any of this, but I'm here to talk about it in the clearest, best way I can. If anyone wants to point to primary sources, I'll read them.

I want more people to spend a few minutes and actually give the explanation offered by Anthropic a try. What if isolating the problems was genuinely hard? We all know hindsight is 20/20, and yet people still armchair-quarterback.

At the risk of sounding preachy, I'm here to say "people, we need to do better". Hacker News is a special place, but we lose it a little bit every time we don't put in a quality effort.


Fair enough. If the comments in question were still editable, I would be happy to replace 'gaslighting' with 'being a bit slippery' or something less controversial.

No worries about 'sounding preachy'; it's a good thing people want to uphold the sobriety that makes HN special.


I think there are plenty of such replies on GitHub. For example, the one responding to the AMD AI director's issue.

They didn’t say “your experience is not worse” but they did frequently say “just turn reasoning effort back up and it will be fine”. And that pretty explicitly invalidates all the (correct) feedback which said it’s not just reasoning effort.

They knew they had deliberately made their system worse, despite their lame promise published today that they would never do such a thing. And so they incorrectly assumed that their ham-fisted policy blunder was the only problem.

Still plenty I prefer about Claude over GPT but this really stings.


I'm aiming for intellectual honesty here. I'm not taking a side for a person or an org, but I'm taking a stand for a quality bar.

> They knew they had deliberately made their system worse

Define "they". The teams that made particular changes? In real-world organizations, not all relevant information flows to all the right places at the right time. Mistakes happen because these are complex systems.

Define "worse". There are lot of factors involved. With a given amount of capacity at a given time, some aspect of "quality" has to give. So "quality" is a judgment call. It is easy to use a non-charitable definition to "gotcha" someone. (Some concepts are inherently indefensible. Sometimes you just can't win. "Quality" is one of those things. As soon as I define quality one way, you can attack me by defining it another way. A particular version of this principle is explained in The Alignment Problem by Brian Christian, by the way, regarding predictive policing iirc.)

I'm seeing a lot of moral outrage but not enough intellectual curiosity. It is embarrassingly easy to say "they should have done better" ... ok. Until someone demonstrates to me that they understand the complexity of a nearly-billion-dollar company rapidly scaling with new technology, growing faster than most people comprehend, I think ... they are just complaining and cooking up reasons to feel right in feeling that way. This possible truth -- that complex systems are hard to do well -- apparently doesn't scratch that itch for many people. So they reach for blame. This is not the way to learn. Blaming tends to cut off curiosity.

I suggest this instead: redirect if you can to "what makes these things so complicated?" and go learn about that. You'll be happier, smarter, and ... most importantly ... be building a habit that will serve you well in life. Take it from an old guy who is late to the game on this. I've bailed on companies because "I thought I knew better". :/


> Define "they". The teams that made particular changes? In real-world organizations, not all relevant information flows to all the right places at the right time. Mistakes happen because these are complex systems.

Accidentally/deliberately making your CS teams ill-informed should not function as a get out of jail free card. Rather the reverse.


> Accidentally/deliberately making your CS teams ill-informed should not function as a get out of jail free card. Rather the reverse.

Thanks for your reply. I very much agree that intention or competence does not change responsibility and accountability. Both principles still apply.

In this comment, I'm mostly in philosopher-and-rationalist mode. Except for the [0] footnote, I try to shy away from my personal take about Anthropic and the bigger stakes. See [0] for my take in brief. (And yes, I know "brief" is ironic, given that the footnote is longer than most HN comments.) Here's my overall observation about the arc of the conversation: we're still dancing around the deeper issues. There is more work to do.

It helps to recognize the work metaphors are doing here. You chose the phrase "get out of jail free". Intentionally or not, this phrase smuggles in some notion of illegality or at least "deserving of punishment" [1]. The Anthropic mistakes have real-world impacts, including upset customers, but (as I see it) we're not in the realm of legal action nor in the realm of "just punishment", by which I mean the idea of retributive justice [2].

So, with this in mind, from a customer-decision point of view, the following are foundational:

    Rat-1: Pay attention to the _effects_ of what Anthropic did.

    Rat-2: Pay attention to how these effects _affect me_.

But when building on this foundation, I need to be careful:

    Rat-3: Don't one-sidedly or selectively re-introduce *intent* into my other critiques. If I get back to diagnosing or inferring *intent*, I have to do so while actually seeking the whole truth, not just selecting explanations that serve my interests.

    Rat-4: When in a customer frame, I don't benefit from "moralizing" ... my customer POV is not well suited for that. As a customer, my job is to *make a sensible decision*. Should I keep using Claude? If so, how do I adjust my expectations and workflow?
...

Personally, looking over the dozens of comments I've read here, a common theme I see is disappointment. I relatively rarely see constructive, truth-seeking retrospective work. On the other hand, I see Anthropic going out of their way to communicate their retrospective while admitting they need to do better. This is why I say this:

    Of course companies are going to screw up. The question is: as a customer, am I going to take a time-averaged view so I don't shoot myself in the foot by overreacting?

[0]: My personal big-picture take is that if anyone in the world, anywhere, builds a superintelligent AI using our current levels of understanding, there is no expectation at all that we can control it safely. So I predict, with probability close to 90% or higher, that civilization and humanity as we know it won't last another 10 years after the onset of superintelligence (ASI).

This is the IABIED argument (from the book "If Anyone Builds It, Everyone Dies" by Yudkowsky and Soares) -- plenty of people write about it -- though imo few of the book reviews I've seen substantively engage with the core arguments. Instead, most reviewers reject it for the usual reasons: it is a weird and uncomfortable argument, and the people making it seem wacky or self-interested to some. I do respect reviewers who disagree based on model-driven thinking. Everything else, to me, reads like emotional coping rather than substantive engagement.

With this in mind, I care a lot about Anthropic's failures and what they imply about how it participates in the evolving situation.

But I care almost zero about conventional notions of blame. Taking materialism as true, free will is at bottom a helpful fiction for people. For most people, it is the reality we take for granted. The problem is that blame is often just an excuse for scapegoating people for their mistakes, when in fact these mistakes just flow downstream from the laws of physics. Many of these mistakes are nearly statistical certainties when viewed through the lens of system dynamics, or sociology, or psychology, or neuroscience, or having bad role models, or being born into a not-great situation.

To put it charitably, blame is what people do when they want to pin s--tty consequences on the actions of people and systems. That sense bothers me less; I'm trying to shift thinking away from the kind of blaming that leads to bad predictions.

[1]: From the Urban Dictionary (I'm not citing this as "proof of credibility" of the definition):

    "A get out of jail free card is a metaphorical way to refer to anything that will get someone out of an undesirable situation or allow them to avoid punishment."

... I'm only citing UD so you know what I mean. When I use the word "dictionary", I mean a catalog of usage, not a prescription of correctness.

[2]: https://plato.stanford.edu/entries/justice-retributive/


> All of this points to their priorities not being aligned with their users’.

Framing this as "aligned" or "not aligned" ignores the interesting reality in the middle. It is banal to say an organization isn't perfectly aligned with its customers.

I'm not disagreeing with the commenter's frustration. But I think it can help to try something out: take, say, the top three companies whose products you interact with on a regular basis. Take stock of (1) how fast that technology is moving; (2) how often things break from your POV; (3) how soon the company acknowledges it; and (4) how long it takes for a fix. Then ask: "If a friend of mine (competent and hard-working) worked there, would I give the company more credit?"

My overall feel is that people underestimate the complexity of the systems at Anthropic and the chaos of the growth.

These kinds of conversations are a sort of window into people's expectations and their ability to envision possible explanations of what is happening at Anthropic.


>My overall feel is that people underestimate the complexity of the systems at Anthropic and the chaos of the growth.

Making changes like reducing the usage window at peak times (https://x.com/trq212/status/2037254607001559305) without announcing it (until after the backlash) is the sort of thing that's making people lose trust in Anthropic. They completely ignored support tickets and GitHub issues about that for 3 days.

You shouldn't have to rely on finding an individual employee's posts on Reddit or X for policy announcements.

That policy hasn't even been put into their official documentation nearly one month on - https://support.claude.com/en/articles/11647753-how-do-usage...

A company with their resources could easily do better.


> You shouldn't have to rely on finding an individual employee's posts on Reddit or X for policy announcements.

I agree with this as a principle. Which raises this question: is it true? Are you certain these messages don't show up in (a) Claude Code and (b) Claude on the Web?

I've seen these kinds of messages pop up. I haven't taken inventory of how often they do. As a guess, maybe I see notifications like this several times a month. If any important ones are missing, that is a mistake.

Anyhow, this is the kind of discussion that I want people to have. I appreciate the detail.

> A company with their resources could easily do better.

Yes, they could. But easily? I'm not so sure.

Also ask yourself: what function does saying e.g. "they could have done better" serve? What does it help accomplish? I'm asking. I think it often serves as a sort of self-reinforcing thing to say that doesn't really invite more thinking.

Ask yourself: if "doing better" was easy, why didn't it happen? Maybe it isn't quite as easy as you think? Maybe you've baked in a lot of assumptions. Easy for whom? Easy why? Try the questions I asked above. They are not rhetorical. Here they are again, rephrased a bit:

    > take the top three companies whose product you 
    > interact with on a regular basis. Take stock of
    > (1) how fast the technology is moving;
    > (2) how often things break from your POV;
    > (3) how soon the company acknowledges it;
    > (4) how long it takes for a fix.
    >
    > Then ask "if a friend of mine (competent, hard working)
    > worked there, how would I be thinking about the situation?"

There is a reason why I recommend asking these questions. Forcing yourself to write down your reference class is ... to me, table stakes, but, well, lots of people just leave it floating and then ask others to magically reconstruct it. Envisioning a friend working there shifts your viewpoint and can shake loose many common biases.

Thanks for the example -- you are one of the first people to quote a source, so I appreciate it. This makes constructive discussion much easier. You quoted this:

    > To manage growing demand for Claude we're adjusting our
    > 5 hour session limits for free/Pro/Max subs during peak
    > hours. Your weekly limits remain unchanged.
    >
    > During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll
    > move through your 5-hour session limits faster than before.

And yeah, no disagreement from me: many users are not going to like this. Narrowly speaking, I don't want any change that reduces what I get for what I pay for. I also care about overall reliability, so if some users on the right tail of the usage distribution find themselves losing out, my take is: "Yeah, they are disappointed, but this is a rational decision for any company with this kind of subscription model."

Broken expectations are highly dependent on perception. People get used to having some particular level of service. When that changes and they notice, a strong human default is to reach for something to blame. Then we rationalize. Those last two parts are unhelpful, and I push back on them frequently.


So you're arguing they're just plain incompetent? Not sure that's going to win the trust of customers either.

> So you're arguing they're just plain incompetent? Not sure that's going to win the trust of customers either.

This is not a charitable interpretation of what I wrote. Please take a minute and rethink and rephrase. Here are two important guidelines, hopefully familiar to someone who has had an account since 2019:

> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

> Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.


I didn't assume bad faith, I simply reworded your conclusions with less soft language so that others would understand your position more clearly.

You are saying what they are doing is hard. That's fine. Their stated goals are to be the responsible stewards of the technology and we agree they are failing at that goal. You would attribute that to incompetence and not malice.


I personally try to follow Rapoport's Rules, and since I think they are consistent with the HN Guidelines, I like to mention them: [1]

I've thought on it, and I will try to start off with something we both agree on... We both agree that Anthropic made some mistakes, but this is probably a pretty uninteresting and shallow agreement. I find it unlikely that we would enumerate or characterize the mistakes similarly. I find it unlikely that we would be anywhere near the same headspace about our bigger-picture takes.

> I didn't assume bad faith

Ok, I'm glad. That one didn't concern me; if I had a do-over I would remove that one from the list. Sorry about that. These are the ones that concern me:

    > Comments should get more thoughtful and substantive,
    > not less, as a topic gets more divisive.

When I read your earlier comment (~20 words), it didn't come across as a thoughtful and substantive response to my comment (~160 words). I know length isn't a perfect measure, nor the only measure, but it does matter.

    > Please respond to the strongest plausible interpretation of what
    > someone says, not a weaker one that's easier to criticize.
Are you sure you didn't choose an easier-to-criticize interpretation? Did you take the time to state to yourself what I was trying to say? Back to Rapoport's Rules ...

    > You should attempt to re-express your target’s position so
    > clearly, vividly, and fairly that your target says, “Thanks,
    > I wish I’d thought of putting it that way.”

I'm grateful when people can express what I'm going for better than the way I wrote it or said it.

> I simply reworded your conclusions with less soft language

Technically speaking, lots of things could be called "rewording", but what you did was relatively far from "simply rewording". Charitably, it is closer to "your interpretation". But my intent was lost, so "rewording" doesn't fit.

> ... so that others would understand your position more clearly.

If you want to help others understand, then it is good to make sure you understand. For that, I recommend asking questions.

> Their stated goals are to be the responsible stewards of the technology and we agree they are failing at that goal.

No, I do not agree to that phrasing. It is likely I don't agree with your intention behind it either.

> You would attribute that to incompetence and not malice.

No; even if I agreed with the premise, I think it is more likely I would still disagree. I don't even like the framing of "either malice or incompetence". These ideas don't carve reality at the joints. [2] [3] There are a lot of stereotypes about "incompetence" but I don't think they really help us understand the world. These stereotypes are more like thought-terminators than interesting generative lenses.

I'll try to bring it back to the words "malice" and "incompetence" even though I think the latter is nigh-useless as a sense-making tool. Many mistakes happen without malice or incompetence; many mistakes "just happen" because people and organizations are not designed to be perfect. They are designed to be good enough. To not make any short-term mistakes would likely require too much energy or too much rigidity, both of which would be a worse category of mistake.

Try to think counterfactually: imagine a world where Anthropic is neither malicious nor incompetent and yet mistakes still happened. What would this look like?

When you think of what Anthropic did wrong, what do you see as the lead-up to it? Can you really envision the chain of events that brought it about? Imagine reading the email chain or the PRs. Can you see how there may have been various "off-ramps" where history might have gone differently? But for each of those diversions, how likely would it be that they match the universe we're in?

At some point, figuring out what even counts as a "mistake" starts to feel strange. Does it require consciousness? Most people think so. Yet we say organizations make mistakes, and they aren't conscious -- or are they? Who do we blame? The CEO, because the buck stops there, right? He "should have known better". But why? Wait, but the Board is responsible...?

Is there any ethical foundation here? Some standard at all or is this all just anger dressed up as an argument? If this assigning blame thing starts to feel horribly complicated or even pointless, then maybe I've made my point. :)

If nothing else, when you read what I write, I want it to make you stop, get out a sheet of paper, and try to imagine something vividly. Your imagination I think will persuade you better than I can.

[1]: https://themindcollection.com/rapoports-rules/

[2]: https://jollycontrarian.com/index.php?title=Carving_nature_a...

[3]: https://english.stackexchange.com/questions/303819/what-do-t...


Do you not think people here work at big companies with big products? I do, and we have a much higher bar for shipping.

This was also my first impression. But it seems to me the changes are mostly about swapping what panels dock where (left or right) and maybe some additions/tweaks around the AI panels. On macOS these are still the same:

    ⌘B : toggle the left dock
    ⌘R : toggle the right dock

If you opt in to the new layout, the panels that used to sit in the left dock are now in the right dock. I will give it a try, even for classic coding. One can change which panels get docked where from the settings window.

I'll bet if you point out the issues where this is described and measured, you'll get some eyeballs.

Being a good regulator means solving a nearly impossible satisficing problem. You have to follow the law and achieve results with a limited budget and political constraints. Given the priorities of, say, the FTC or state AGs or the SEC, I don't think GitHub is even a blip on their radar. Of all the regulators, I would hazard a guess that the California Privacy Protection Agency is the most likely to prioritize a look, but I still doubt it.

I know lots of idealists -- I went to a public policy school. And in some areas, I am one myself. We need them; they can push for their causes.

But if you ever find yourself working as a regulator, you'll find the world is complicated and messy. Regulators that overreach often make things worse for the very causes they support.

If you haven't yet, go find some regulators who have to take companies all the way to court and win. I have known some in certain fields. Learn from them. Some would probably really enjoy talking to a disinterested third party to learn the domain. There are even ways to get involved as a sort of citizen journalist, if you want.

But these sort of blanket calls for "make an example of GitHub" are probably a waste of time. I think a broader view is needed here. Think about the causal chain of problems and find a link where you have leverage. Then focus your effort on that link.

I live in the DC area, where ignorance of how the government works leads to people walking away and not taking you seriously. When tech people put effort into understanding the machinery of government comparable to what they put into technology, that is awesome. There are some amazing examples of this if you look around.

There are no excuses. Tech people readily accept that they have to work around the warts of their infrastructure. (We are often lucky because we get to rebuild so much software ourselves.) But we forget what it's like to work with systems that have to resist change because they are coordination points between multiple stakeholders. The conflict is by design!

Anyhow, we have no excuse to blame the warts in our governmental system. You either fix them or work around them or both.

The world is a big broken machine. Almost no individual person is to blame. You just have to understand where to turn the wrench.


Thinking out loud: what are the best practices for vetting a tool's telemetry details? The devil is in the details.

A quick summary of my Claude-assisted research is at the Gist below. Top of mind is some kind of trusted intermediary service with a vested interest in striking a definable middle ground that is good enough for both sides (users and product-builders).

Gist: WIP 31 minutes in still cookin'


Hey, please don't blindly paste/post from LLMs, please.

I appreciate the "please", but this comes across as presumptive. First, you don't know the effort level I put in. Second, you haven't seen the end result. Third, why do you think I would "blindly paste" from an LLM? If you take a look at my profile or other comments, I hope that is clear.

I appreciate feedback in general, and I am glad when people care about making HN a nice place for discussion and community. Sometimes a well-meaning person goes a little too far, and I think it happened above. That's my charitable interpretation. It is also possible that in this age of AI, people are understandably pissed and sending that frustration out into the world. When that happens, just remember the people reading it matter too.

About me: I would not share something unless I think it has value to at least one other person on HN. I've done a lot of work about data and privacy in general (having worked at a differential privacy startup in the past), but I'm much newer to the idea of digging into ways of making telemetry gathering more transparent. I haven't found great resources on the Web about this yet, which is why I started doing the research. And I'm going to share it for others to read, criticize, build on top of, etc.


Where is the gist? I assumed LLM/bot because of the disconnect between "here's a gist" and "still cookin"

I ask everyone to be a bit more careful about the "assume LLM/bot" thing. That hair-trigger is often counterproductive.

Anyhow, the Claude research took 36 minutes to run, so I put it to the side and didn't link it originally. I'm still thinking through it -- there is a lot to cover : https://gist.github.com/xpe/654af2731d40a145e1d0b8b694fe8fd3

