Hacker News

I'm not trying to convert you, just want to share process tips that I see working for me and others. We're using agents, not a chat, because they can do complex work in pursuit of a goal.

1. Make artifacts. If you're researching a technology or testing a hypothesis, fire off subagents to explore different parts of the problem space, each reporting back into its own doc. Then another agent synthesizes the docs into a conclusion/report.
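A minimal sketch of that fan-out/synthesize shape. `run_agent` here is a hypothetical stand-in (a stub) for whatever agent API you actually use; the point is the orchestration pattern, not the call itself:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(prompt: str) -> str:
    # Stub: a real implementation would invoke your agent framework here.
    return f"findings for: {prompt}"

def research(topic: str, subtopics: list[str]) -> str:
    # Fan out: one subagent per subtopic, each producing its own doc artifact.
    with ThreadPoolExecutor() as pool:
        docs = list(pool.map(
            lambda s: run_agent(f"Research {s} of {topic}; write a doc."),
            subtopics,
        ))
    # Synthesize: a final agent reads all the docs and writes the report.
    combined = "\n\n".join(docs)
    return run_agent(f"Synthesize these docs into a report:\n{combined}")
```

Each subagent gets a fresh context, so none of them pays for the others' exploration; only the docs come back.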

2. Require citations. "Use these trusted sources. Cite a trusted source for each claim. Cite with enough context that it's clear your citation supports the claim, and refuse to cite if the citation doesn't support the claim."
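You can also enforce part of this mechanically before any review pass. An illustrative checker, assuming the agent is told to tag each claim line with a `[source: ...]` marker (the tag format and the allowlisted domains are my invention, not a standard):

```python
import re

# Example allowlist of trusted sources; substitute your own.
TRUSTED = {"docs.python.org", "peps.python.org"}

def uncited_or_untrusted(report: str) -> list[str]:
    """Return the lines that lack a citation or cite an untrusted source."""
    bad = []
    for line in report.splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.search(r"\[source:\s*([^\]]+)\]", line)
        if m is None or m.group(1).strip() not in TRUSTED:
            bad.append(line)
    return bad
```

Anything this flags goes back to the agent, or gets dropped from the synthesis.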

3. Review. This lets you then fire off a subagent to review the synthesis. It can have its own prompt: look for confirming and disconfirming evidence, don't trust uncited claims. If you find it making conflation mistakes, figure out at what stage they arise and why, and adjust your process to get in front of them.
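One way the reviewer's own prompt might look, kept separate from the synthesis prompt. Again, `run_agent` is a hypothetical stand-in for your agent API:

```python
# The reviewer gets a different job description than the synthesizer.
REVIEW_PROMPT = """You are reviewing a synthesized report, not writing one.
- Look for confirming AND disconfirming evidence for each claim.
- Do not trust any claim without a citation; flag it.
- Flag conflations: places where two distinct things are treated as one.
Return a list of issues, each pointing at the offending passage."""

def review(report: str, run_agent) -> str:
    # Hand the reviewer both its standing instructions and the report.
    return run_agent(f"{REVIEW_PROMPT}\n\n---\n{report}")
```

Because the reviewer runs in a fresh context, it isn't anchored by whatever reasoning produced the report.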

4. Manage your context. An LLM has a fixed context size ("chat length"), and facts & instructions at the front of it tend to be followed more faithfully than those at the end. Subagents are a way of managing that context to get more from a single run. Artifacts like notebooks or records of subagent output move content outside the context, so you can pick up in a new session ("chat") and continue the work.
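A rough sketch of that front-loading idea: instructions are pinned first and never trimmed, while history is dropped oldest-first to fit a budget. (Character count stands in for token count here; a real version would use your tokenizer.)

```python
def build_context(instructions: str, history: list[str],
                  budget_chars: int) -> str:
    # Instructions always go at the front and are never trimmed.
    used = len(instructions)
    # Keep the most recent history that still fits; drop the oldest first.
    kept: list[str] = []
    for msg in reversed(history):
        if used + len(msg) > budget_chars:
            break
        kept.append(msg)
        used += len(msg)
    return "\n".join([instructions] + list(reversed(kept)))
```

What gets trimmed away isn't lost if it was already written into an artifact: the next session reads the doc instead of replaying the conversation.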

It's less fun than just having a chat with ChatGPT, but I find I get much better quality results using these techniques. Hope this helps! If you're not interested in doing this (too much like work, and you already have something that works), it's no skin off my nose. All the best!



Thanks for the thoughtful reply! I definitely want to try a more complex setup when I have more time on my hands


