
I'm not sure we can draw useful conclusions from the Claude Code-written C compiler yet. Yes, it can compile the Linux kernel. Will it be able to keep doing that moving forward? Can a Linux contributor reliably use this compiler for their development, or do parts of it simply not work because they weren't exercised by the kernel version it was developed against? How will it handle adding new functionality? Will it get more and more expensive to make new features work, because the code isn't well factored?
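
One way to probe that question is differential testing against gcc on constructs the kernel build barely touches. Here's a minimal sketch of the kind of test case I mean (the file name is made up; the kernel banned VLAs years ago and kernel code avoids floating point entirely, so "it compiles the kernel" says nothing about any of this):

    /* probe.c -- constructs that compiling the Linux kernel exercises
     * rarely or never. Build with both compilers, run, and diff the
     * output; any disagreement is a latent bug the kernel build could
     * never have revealed. */
    #include <stdio.h>

    static int sum_vla(int n) {
        int a[n];                 /* VLA: banned from kernel code */
        int s = 0;
        for (int i = 0; i < n; i++) a[i] = i;
        for (int i = 0; i < n; i++) s += a[i];
        return s;
    }

    int main(void) {
        long double x = 1.0L / 3.0L;   /* kernel code avoids FP entirely */
        printf("%d %.20Lf\n", sum_vla(10), x);
        puts(_Generic(x, long double: "long double", default: "?"));
        return 0;
    }

Tools like Csmith automate exactly this kind of randomized differential testing against reference compilers, and they keep finding bugs even in mature, human-written compilers.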

To me this doesn't feel that many steps above using a genetic algorithm to generate a compiler that can compile the kernel.

If we think back to pre-AI programming times, did anyone really want this as a solution to programming problems? Maybe I'm alone in this, but I always thought the problem was figuring out how to structure programs in such a way that humans can understand and reason about them, so we can have a certain level of confidence in their correctness. This is super important for long-lived programs, where we need to keep making changes. And no, tests are not sufficient for that.
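
To make the "tests are not sufficient" point concrete, here's the classic binary-search midpoint bug (a sketch, nothing to do with this particular compiler): it survives essentially any test suite built from realistic inputs, and only reading and reasoning about the code catches it.

    /* Passes every small unit test, yet is undefined behavior the
     * moment lo + hi exceeds INT_MAX -- this exact bug famously sat in
     * Java's library binary search for years. */
    int midpoint_buggy(int lo, int hi) {
        return (lo + hi) / 2;
    }

    /* The fix is obvious once a human reasons about the arithmetic. */
    int midpoint(int lo, int hi) {
        return lo + (hi - lo) / 2;    /* safe for 0 <= lo <= hi */
    }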

Of course, all programs have bugs, but there's a qualitative difference between a program designed to be understood, and a program that is effectively a black box that was generated by an LLM.

There's no reason to think computers won't eventually be able to do this well, but at the very least the current crop of LLMs doesn't seem to be there.

> and now they can churn out millions of [lines of] code easily.

It's funny how we suddenly shifted from making fun of managers who think programmers should be measured by the number of lines of code they generate, to praising LLMs for exactly that. Why did this happen? Because, just like those managers, programmers who let LLMs write the code aren't reading the output and don't understand it, so the only measure of "productivity" they have left is lines of code generated.
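
To illustrate why the metric is meaningless, here are two equivalent functions (a contrived sketch); by the lines-of-code measure, whoever wrote the first was several times more "productive":

    /* Identical behavior, 12 lines vs. 3. If nobody reads the output,
     * the verbose version looks like more work got done. */
    int clamp_verbose(int x, int lo, int hi) {
        int result;
        if (x < lo) {
            result = lo;
        } else if (x > hi) {
            result = hi;
        } else {
            result = x;
        }
        return result;
    }

    int clamp(int x, int lo, int hi) {
        return x < lo ? lo : (x > hi ? hi : x);
    }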

Note that I'm not suggesting that using AI as a tool to aid in software development is a bad thing. I just don't think letting a machine write the software for us is going to be a net win.
