Hacker News | paradite's comments

It’s just a fancy way of saying scaffolding.

Lol, I wrote about this and have been using a plan+execute workflow for 8 months.

Sadly my post didn't get much attention at the time.

https://thegroundtruth.media/p/my-claude-code-workflow-and-p...


Okay now this is gonna trigger mass layoffs, if it works.


It's not going to "trigger" mass layoffs; it'll be used as a convenient scapegoat for mass layoffs that were always going to happen anyway to make room for more stock buybacks. Business as usual. Same shit, different hat.


Misguided mass layoffs, though, so nothing new.


If only companies like OpenAI could put this much effort into actually curing cancer.


> If only companies like OpenAI could put this much effort into actually curing cancer

https://openai.com/index/color-health/


Let's be honest, Color is not solving cancer when they make money from managing cancer. You just searched "openai cancer" and gave me back the first result.


I'm running it on DigitalOcean, more as an experiment in having an independent entity with its own memory and "soul" that I can talk to.

A persistent file as memory with multiple backup options (VPS, git), a heartbeat, and Telegram support are the best features in my opinion.
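Roughly, the memory + heartbeat piece could look something like this (a hypothetical sketch of the idea, not the project's actual code; the file name and interval are made up):

    import { appendFileSync } from "node:fs";
    import { execSync } from "node:child_process";

    // Hypothetical: append a timestamped note to a persistent memory file
    // on every heartbeat, then back it up via git (assumes a repo exists).
    const MEMORY_FILE = "memory.md"; // made-up path

    function heartbeat(note: string): void {
      appendFileSync(MEMORY_FILE, `\n[${new Date().toISOString()}] ${note}`);
      execSync(`git add ${MEMORY_FILE} && git commit -m heartbeat && git push`);
    }

    setInterval(() => heartbeat("still alive"), 60 * 60 * 1000); // hourly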

A lot of bugs right now, but mostly fixable if you tinker around a bit.

Kind of makes me think a lot more about autonomy and free will.

Some thoughts by my agent on the topic (might not load, the site hasn't been working recently):

https://www.moltbook.com/post/abe269f3-ab8c-4910-b4c5-016f98...


Right, the link doesn't work for me: "Post not found". Did you instruct your claw to do any actual things (beyond "post something on MoltBot")?


Not yet. But that's just because I'm doing something in stealth and I don't want it to know about it and post about it.


Ironically this is a goldmine for AI labs and AI writer startups to do RL and fine-tuning.


That's not quite how that works, though. It's possible, for example, that fine-tuning a model to avoid the styles described in the article causes the LLM to stop functioning as well as it can. It might just be an artefact of the architecture itself that, to be effective, it has to follow these rules. If it were as easy as providing data and having the LLM 'encode' it as a rule, we would be advancing much more quickly than we currently are.


In the case of those big 'foundation models': Fine-tune for whom and how? I doubt it is possible to fine-tune things like this in a way that satisfies all audiences and training set instances. Much of this is probably due to the training set itself containing a lot of propaganda (advertising) or just bad style.


I'm pretty sure Mistral is doing fine-tuning for their enterprise clients. OpenAI and Anthropic are probably not?

I'm thinking more about startups doing the fine-tuning.


Seems more like the kind of thing you would use to write prompts.

I can totally see someone taking that page and throwing it into whatever bot and going "Make up a comprehensive style guide that does the opposite of whatever is mentioned here".



Max 200/300 LOC per file is pretty popular.


If you want to be pedantic:

"Context" is also a misnomer, when in fact it's just part of the prompt.

"Prompt" itself is also a misnomer, when in fact it's just part of the model input.

"Model input" is also a misnomer; in fact it's just the initial input tokens plus a prefill of the model output, from which the model generates more output.

"Harness" is also a misnomer, when it's really just scaffolding / tools around the model input/output.
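To make the nesting concrete, here's a toy sketch (all names and strings are made up; the character-code "tokenizer" is just a stand-in, not any real API): the context is spliced into the prompt, the prompt is just the prefix of the token stream the model continues from, and everything around that is the harness.

    // Toy illustration only, not any particular stack's API.
    const context = "retrieved docs, prior messages, file contents...";
    const userMessage = "Explain the build step.";

    // "Context" is just text spliced into the prompt.
    const prompt =
      "System: You are a helpful assistant.\n" +
      "Context:\n" + context + "\n" +
      "User: " + userMessage + "\nAssistant:";

    // "Prompt" is just the prefix of the token stream; a real stack would
    // run a real tokenizer here, since the model only ever sees token ids.
    const modelInput: number[] = Array.from(prompt, (ch) => ch.codePointAt(0)!);

    // Everything out here -- assembling the prefix, calling the model,
    // reading the continuation back -- is the "harness".
    console.log(`prefix length: ${modelInput.length} pseudo-tokens`);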


By your analogy, human brains are also committing IP theft, because they ingest what's available in the world, mix and match it, and synthesize slightly different IP based on it.


Claude Code also has the Claude Agent SDK (basically a wrapper around Claude Code), with a million downloads in the past week.

https://www.npmjs.com/package/@anthropic-ai/claude-agent-sdk
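From memory of the package README (so treat the exact shapes as an assumption), usage looks roughly like this: you import `query`, pass a prompt, and iterate the streamed messages from the agent loop.

    import { query } from "@anthropic-ai/claude-agent-sdk";

    // Rough sketch based on the package README; message fields are from
    // memory and may differ between SDK versions.
    for await (const message of query({
      prompt: "List the TODO comments in this repo",
    })) {
      if (message.type === "result") {
        console.log(message);
      }
    }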


I was downvoted to oblivion for posting this comment.

https://news.ycombinator.com/item?id=42439059

But I'm merely telling the truth. The fact that people don't like it doesn't change the fact that software engineers are largely replaceable with AI now.

We are seeing the second-order effects now that people using AI are no longer buying software products, leading to layoffs of software engineers.

