Hacker News | codyswann's comments

This doesn't make sense. I think you mean "If you are really good at something, you'll find AI might not be as good at it as you are."

I think they mean that realizing AI only looks good at something pulls back the curtain, in a sense; afterwards, every appearance of AI being good at something seems fake.

Ask it to take control of a browser with something like Playwright, use the UI the way an end user would, and evaluate whether the experience is any good.

Code isn't the moat and it hasn't been for quite some time. Data is the moat.


It's very problematic for companies, mainly because of the tooling. Large companies are equating Lovable, Replit, Bolt, and v0 with Claude Code, Codex, etc., all under the "Vibe Coding" banner.

I try to file the former under the banner of "Prompt-to-app Tools" and the latter under "Autonomous AI Engineering."


Better for sure.

The ascendancy of non-descriptive jargon for everything is irritating as hell. If something is supposed to mean "AI-generated code," then it needs to contain at least the most important word from that description. Sad that this has to be explained now.


Alternate headline: "Parents failed to dissuade son from killing himself"


Even if everyone on earth agrees with an opinion, it's still an opinion. There is a material difference between a fact and an opinion.


No. Facts are facts. Opinions are opinions. And statements of fact are unverified facts.

I wish people would start understanding the difference.

"Ice cream is cold" is an opinion.

"Ice cream melts at 50 degrees Fahrenheit" is a statement of fact.


Agreed. If your identity is your ability to bang away on a keyboard writing instructions to a computer in Python (or any other "language"), you're in for a bad time.

If your identity is solving difficult, domain-specific software-based problems, efficiently and securely, it doesn't matter if your instructions are written in English, French or... Python.


One point where, I think, the analogy fails is context.

If one wants to modify a code base, it's necessary to be able to, sort of, load the program into one's head and then work off a mental model. The "slowness" of traditional development and the tooling around it gave people enough time to do this and, over time, get really good at navigating and changing a code base.

With LLMs able to generate huge amounts of code in a short time, this is missing. The LLM doesn't fully know what it generated or its nuances. The developer doesn't have the time to absorb all of it, so at the end of the day you have something running that nobody (including the original AI author) really understands. That's risky.

Of course, there are ways to mitigate and handle this; I don't know if the original analogy is missing that.


Is that a "yes" on lint rules? AI needs determinism to block commits, because once the slop hits code review, it's already a gigantic waste of time. AI needs self-correcting loops.


It supports fully deterministic rules, which we use LLMs to help you write.

Agreed on all of this too. This is why we built the CLI tool: to shift the work left.
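For what it's worth, a deterministic commit gate doesn't need anything exotic. Here's a minimal sketch of what one such rule might look like, using Python's stdlib `ast` module; the specific rule (flagging bare `print()` calls) and the function name are just hypothetical examples, not anything from the tool discussed above:

```python
import ast


def check_no_print(source: str, filename: str = "<stdin>") -> list[str]:
    """Return one violation message per bare print() call in the source.

    Meant to be invoked from a git pre-commit hook: if any violations
    come back, the hook exits nonzero and the commit is blocked. The
    check is binary and repeatable, so an AI's self-correcting loop
    gets an unambiguous pass/fail signal instead of a reviewer's vibe.
    """
    violations = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        # Only flag direct calls to the name `print`, not attribute
        # calls like logger.print() or a local variable named print.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            violations.append(f"{filename}:{node.lineno}: bare print() call")
    return violations
```

A pre-commit hook would run this over the staged files and exit with status 1 if the list is non-empty; because the rule is pure syntax analysis, the same input always produces the same verdict, which is the determinism the parent comment is asking for.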


As a manager, I think weekly 1:1s are a waste of time.

As an employee, I think weekly 1:1s are a waste of time.

