Hacker News | kaoD's comments

I wonder if the child safety section "leaks" behavior into other risky topics, like malware analysis. I see overlap in how the reports mention that once the safety mechanism has been tripped, the model becomes even more reluctant to work, which seems to match the instructions here for child safety.

> What can't Claude do at this point?

Writing maintainable code that scales.


To give a different perspective: archival is important. If nobody does this job, generational knowledge is lost at some point.

I talked plenty with my grandpa, but I'm sure he didn't even tell me 20% of his life.

And my other grandpa died when I was still a kid, so I didn't even get to have adult conversations with him.

Imagine making this available to your great-great-grandson.


Yeah, but you're kaoD. You're a bona fide person. You should talk with other people; it's good. (We're chatting right now.)

That's quite different from chatting with a bot that pretends to be human. (Do you want to chat with my bot?)


Yes. And I will die along with the memories from my grandpa. Most of them died already with him, and I don't remember all our conversations.

I have no kids but, even if I did, let's say I'd pass 20% of the 20% he passed on to me, and they pass 20% of the 20% of the 20%... You get the idea.
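That 20%-of-20% compounding decays fast. A quick back-of-the-envelope check (the 20% retention figure is just the guess from above, not data):

```python
# If each generation passes on ~20% of what it received,
# the fraction surviving after n generations is 0.2 ** n.
retention_per_generation = 0.2

for n in range(1, 5):
    surviving = retention_per_generation ** n
    print(f"after {n} generation(s): {surviving:.2%}")
# after 3 generations only 0.80% survives; after 4, 0.16%.
```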

Heck, I already forgot 50% of my life since I don't have a journal!

This is not an "either" situation. Archival is important.

People write memoirs for a reason. This is automating the process, not superseding human communication.

I am the sort of person that never took photos (live in the moment yadda yadda). 15 years later, I'm starting to regret it.


Why does it have to be black and white? Why can't a bot do the exploration and notetaking along with people in the channel?

That's not how I interpreted it as being in this instance, but it could certainly be that way.

I guess that'd be like keeping all correspondence in a shoe box (to be reviewed later -- or maybe never), or maybe the automated recording of my phone calls with others (which is completely legal where I am; I don't even have to tell them).

And I suppose whether I felt that would be creepy or not depends a lot upon intent, and consent.

If the intent were pure and good, and the consent both informed and granted, then I'd have no problem with any of this at all -- whether a shoebox, a tape recorder, or a bot is involved in taking the notes.


I called my parents and told them about the idea. They had never even used Telegram before we started this project, but they eagerly joined when they learnt that I was trying to build a family history. They are native Nepali speakers, so the system prompt ensured that the bot always responds to their questions and answers in Nepali.

It is really easy to way overthink, or over-feel, AI.

Sometimes it's just a really good interface that matches the task well.

Think of all the people that still avoided getting a computer a decade or two ago, because "online" was so unnatural and creepy to them. Obviously, the internet had and has those places. And frankly a lot of social media still is.

But it can also just be wikipedia, making flight reservations, etc. When that is all it is doing, what you want it to do, that is all it is.

An automated language interface can just be a really good note collector/collator.

Personally, I look forward to the wise, well-dressed, well-spoken, waist-up robot bartenders we have been promised by movies for decades. Not creepy at all!


Or just use SSH.

> It's service providers the whole way down.

And still likely better than heavily regulated airwaves.


DigitalOcean is the Arduino of cloud.

True, it can't compete with AWS/GCP/Azure if you're large scale. But most of us are not large scale, we just need a no frills experience instead of dealing with 27 nested panels just to spin up a VM.


Code is not the moat; it's the gateway drug to their subscription (hence why they just locked other harnesses out of their subscription).

And the subscription is not Anthropic's moat either since it's likely heavily subsidized. They're just using it to acquire customers.

The moat is locking you into Anthropic's model particularities (extended thinking, getting you into their "mindset", etc.)


> it's still abstraction by definition

I dislike arguing semantics but I bet it's not an abstraction by most engineers' definition of the word.


And they're as deterministic as the underlying thing they're abstracting... which is kinda what makes an abstraction an abstraction.

I get that people love saying LLMs are just compilers from human language to $OUTPUT_FORMAT but... they simply are not except in a stretchy metaphorical sense.

That's only true if you reduce the definition of "compiler" to a narrow `f = In -> Out`. But that is _not_ a compiler. We have a word for that: function. And in an LLM's case, an impure one.
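To make the distinction concrete, here's a toy sketch (purely illustrative; `compile_expr` and `llm_compile` are made-up stand-ins): a compiler is, at minimum, a pure function of its input, while an LLM call with nonzero sampling temperature isn't even that.

```python
import random

# A (toy) compiler: a pure function. Same source in, same output out, every time.
def compile_expr(src: str) -> str:
    # constant-folds a trivial arithmetic expression
    return str(eval(src, {"__builtins__": {}}, {}))

# A stand-in for an LLM "compiling" prose to code: sampling makes it impure.
def llm_compile(prompt: str, temperature: float = 0.8) -> str:
    variants = [f"{prompt};", f"{prompt} ;", f"/* generated */ {prompt};"]
    if temperature == 0:
        return variants[0]
    return random.choice(variants)  # repeated calls can disagree

assert compile_expr("2 + 3") == compile_expr("2 + 3")  # always holds
# llm_compile("x = 1") == llm_compile("x = 1")  # may or may not hold
```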


> this project, even if somewhat spaghettified, will likely take orders of magnitude less time to perfect than it would for someone to create the whole thing from scratch without AI

That's a big leap of faith and... kinda contradicts the article as I understood it.

My experience is entirely opposite (and matches my understanding of the article): vibing from the start makes you take orders of magnitude more time to perfect. AI is a multiplier as an assistant, but a divisor as an engineer.


vibing is different from... steering AI as it goes so it doesn't make fundamentally bad decisions


Neither of these is really the right way to code with AI. There are two basic ways to code with AI that work:

1. Autocomplete. Pretty simple; you only accept auto-completes you actually want, as you manually write code.

2. Software engineering design and implementation workflow. The AI makes a plan, with tasks. It commits those plans to files. It starts sub-agents to tackle the tasks. The sub-agents create tests to validate the code, then write code to pass the tests. The sub-agents finish their tasks, and the AI agent reviews the work to see if it's accurate. Multiple passes find more bugs and fix them in a loop, until there is nothing left to fix.
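The plan/sub-agent/review loop in workflow 2 can be sketched roughly like this (every function name here is a hypothetical stand-in, not any real agent API):

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    done: bool = False  # flipped once a sub-agent's tests pass

def make_plan(spec: str) -> list[Task]:
    # the agent decomposes the spec into tasks (stubbed: one task per line)
    return [Task(line.strip()) for line in spec.splitlines() if line.strip()]

def run_subagent(task: Task) -> None:
    # a sub-agent writes tests first, then code to pass them (stubbed)
    task.done = True

def review(tasks: list[Task]) -> list[Task]:
    # the top-level agent's review pass: anything not done still needs work
    return [t for t in tasks if not t.done]

def workflow(spec: str, max_passes: int = 5) -> bool:
    tasks = make_plan(spec)
    for _ in range(max_passes):
        pending = review(tasks)
        if not pending:
            return True  # nothing left to fix
        for task in pending:
            run_subagent(task)
    return not review(tasks)
```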

I'm amazed that nobody thinks the latter is a real thing that works, when Claude fucking Code has been produced this way for like 6 months. There are tens of thousands of people using this completely vibe-coded software. It's not a hoax.


#2 does not negate my steering suggestion, so I'm not sure how you can conclude nobody thinks it's a real thing that works

also Claude Code is notoriously poorly built, so I wouldn't tout it as SOTA


I have worked at companies from startups to fortune 500. They all have garbage code. Who cares? It works anyway. The world is held together with duct tape, and it's unreasonably effective. I don't believe "code quality" can be measured by how it looks. The only meaningful measure of its quality is whether it runs and solves a user's problem.

Get the best programmer in the world. Have them write the most perfect source code in the world. In 10 years, it has to be completely rewritten. Why? The designer chose some advanced design that is conceptually superior, but did not survive the normal and constant churn of advancing technology. Compare that to some junior sysadmin writing a solution in Perl 5.x. It works 30 years later. Everyone would say the Perl solution was of inferior quality, yet it provides 3x more value.


I hear you about "it just works" mattering infinitely more than some arbitrary code quality metric

but I'm not judging Claude Code by how it looks. I kinda like the aesthetics. I'm talking about how slow, resource-hungry and finicky/flickery it is. It's objectively sloppy.


> when Claude fucking Code has been produced this way for like 6 months

And people can look at the results (illegally), because that whole bunch of code has been leaked. Let's just say it's not looking good. These are the folks who actually made and trained Claude to begin with, they know the model better than anyone else, and the code is still absolute garbage tier by sensible human-written code quality standards.


Yet it works anyway. What does that say about human code quality standards?


Human code quality standards are built around the knowledge that humans prefer polished products that work consistently. You can get away without code quality in the short term, especially if you have no real competitors - to a lot of people, there just aren't any models other than Anthropic's which are particularly useful for software development. But in the long term it gets you into a poor quality trap that's often impossible to escape without starting over from scratch.

(Anthropic, of course, believes that advances in AI capability over the next few years will so radically reshape society that there's no point worrying about the long term.)

