Hacker News | williamcotton's comments

What about trade secrets, breach of contract, etc, etc?

Apparently it's possible to download a whole load of books illegally and still train AI models on them without the models getting pulled after you're found out.

The same reasoning may apply here :P


Yeah, but you don't have trillions of dollars of investments riding on your success, so the rules still apply to you.

Trade secrets, once made public, don't have any legal protection, and I haven't signed any contract with Anthropic.

They published the code on their own, none of that applies.

Undo:

  Ctrl + _ (Ctrl + underscore)

It did not work for me in PuTTY, so I added Ctrl-X Ctrl-U too:

  bind '"\C-x\C-u": undo'
  bind '"\C-_": undo'
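
To make those bindings persistent across sessions, the equivalent lines can go in `~/.inputrc` instead (readline's own syntax, no `bind` wrapper). A sketch, assuming GNU readline; PuTTY's keyboard settings may still change what Ctrl+_ actually sends:

```
"\C-x\C-u": undo
"\C-_": undo
```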

I agree 100%. Boring old software skills are part of what it took to "write" this DSL, complete with a fully featured LSP:

https://github.com/williamcotton/webpipe

https://github.com/williamcotton/webpipe-lsp

(lots of animated GIFs to show off the LSP and debugger!)

While I barely typed any of this myself I sure as heck read most of the generated code. But not all of it!

Of course you have to consider my blog to be "in production":

https://github.com/williamcotton/williamcotton.com/blob/main...

The reason I'm mentioning this project is that the article questions where all the AI apps are. Take a look at the git history of these projects and ask whether this would have been possible to accomplish in such a relatively short timeframe! Or maybe it's totally doable? I'm not sure. I knew nothing about quite a few of the subsystems, e.g., the Debug Adapter Protocol, before implementing them.
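
For anyone curious about the Debug Adapter Protocol mentioned above: like LSP, it's JSON messages framed with a Content-Length header. A minimal sketch of encoding a request (message shape per the DAP spec; the `adapterID` value here is just illustrative):

```python
import json

# DAP (and LSP) frame each JSON body with a Content-Length header
# followed by a blank line (\r\n\r\n). This encodes one message.
def encode_dap_message(body: dict) -> bytes:
    payload = json.dumps(body).encode("utf-8")
    return b"Content-Length: %d\r\n\r\n" % len(payload) + payload

# An "initialize" request, the first message a client sends to an adapter.
request = {
    "seq": 1,
    "type": "request",
    "command": "initialize",
    "arguments": {"adapterID": "webpipe"},  # adapterID is illustrative
}
frame = encode_dap_message(request)
print(frame.decode("utf-8").split("\r\n\r\n")[0])  # the header line
```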


I recently "vibe coded" a long-term background job runner service... thing. It's rather specific to my job and no pre-existing solution existed. I already knew what I wanted the code to be, so it was just a matter of explaining explicitly what I wanted to the AI. Software engineering concepts, patterns, all that stuff. And at the end of the day(s) it took about the same amount of time to code it with AI as it would've taken by hand.

It was a lot of reviewing and proofreading and just verifying everything by hand. The only thing that saved me time was writing the test suite for it.

Would I do it again? Maybe. It was kinda fun programming by explaining an idea in plain English rather than just writing the code itself. But I heavily relied on software engineering skills, especially those theory classes from university, to best explain how it should be structured and written. And of course being able to understand what it outputs. I do not think that someone with no prior software engineering knowledge could do the same thing that I did.


Undo (typing):

  Ctrl + _ (Ctrl + underscore)
Applies to the line editor outside of CC as well.

Lines of code are meaningful when taken in aggregate and useless as a metric for an individual’s contributions.

COCOMO, which considers lines of code, is generally accepted as being accurate (enough) at estimating the value of a software system, at least as far as how courts (in the US) are concerned.

https://en.wikipedia.org/wiki/COCOMO
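
For context, basic COCOMO estimates effort from KLOC with a power law. A minimal sketch using Boehm's published coefficients for "organic" (small, in-house) projects; other project classes use different a and b:

```python
# Basic COCOMO: effort (person-months) = a * KLOC^b.
# a = 2.4, b = 1.05 are the published "organic" coefficients.

def cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Estimated effort in person-months for `kloc` thousand lines of code."""
    return a * kloc ** b

# A 50 KLOC organic project comes out to roughly 146 person-months.
print(round(cocomo_effort(50), 1))
```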


No one has any idea how to estimate software value, so the fact that some courts in the US have used a wildly inaccurate system that considers LOC is so far from evidence that LOC is useful for anything that I can't believe you bothered including it.

LOC is essentially only useful to give a ballpark estimate of complexity, and even then only if you compare orders of magnitude, and only between similar programming languages and ecosystems.

It’s certainly not useful for AI generated projects. Just look at OpenClaw. Last I heard it was something close to half a million lines of code.

When I was in college we had a professor senior year who was obsessed with COCOMO. He required our final group project to be 50k LOC (he also required that we print out every line and turn it in). We made it, but only because we built a generator for the UI and made sure the generator was as verbose as possible.


They gave a widely accepted way to estimate value, and your counter-argument is that it is inaccurate. Fine, but how can you be confident about that? I see only one way, which is for you to come up with a better way and then show that by your better estimation, COCOMO is bad. Until you do that, all your argument comes down to is vibes.

Your example about OpenClaw works exactly against your own argument by the way: OpenAI acquired it for millions by all accounts.


COCOMO has been shown to be inaccurate numerous times. Google it. Here’s one result.

“A very high MMRE (1.00) indicates that, on average, the COCOMO model misses about 100% of the actual project effort. This means that the estimate generated by the model can be double or even greater than the actual effort. This shows that the COCOMO model is not able to provide estimates that are close to the actual value.”

No one in the industry has taken COCOMO seriously for nearly 2 decades.
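
The MMRE figure quoted above is the mean magnitude of relative error; a quick sketch, assuming the standard definition (the sample numbers below are made up for illustration):

```python
# MMRE: average of |actual - estimated| / actual across projects.
# An MMRE of 1.0 means the model is off by ~100% of actual effort
# on average, which is the result quoted above.

def mmre(actuals, estimates):
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

actual_effort    = [100, 200, 50]    # person-months (hypothetical)
estimated_effort = [210, 380, 110]   # model output (hypothetical)
print(round(mmre(actual_effort, estimated_effort), 2))
```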

>OpenClaw

1. OpenAI bought the vibes and the creator. Why would they buy the code? It’s open source.

2. You don't seriously think OpenClaw needs half a million lines of code to provide the functionality it does, do you?

Seriously just go look at the code. No one is defending that as being an efficient use of code.

https://journal.fkpt.org/index.php/BIT/article/download/2027...


> No one in the industry has taken COCOMO seriously for nearly 2 decades.

The funny thing is that we've just discussed how people do take it seriously. It's just that you don't like that. And what do you offer as an alternative?

Like I said, vibes. You think that the value of some software is something you can only "feel". That's not how an engineer thinks. If you're an engineer you should know that if you can't measure it, you can't say anything at all about it. Which means you cannot discount any alternative method until you've got a better way. But clearly you can't think like an engineer.


I don’t know what to tell you. All the evidence says COCOMO is too inaccurate to use. Show me evidence that says it’s accurate.

Just because someone wrote a book and a few bankruptcy trustees used it doesn’t magically make it accurate. Just because something is systematic doesn’t mean it’s worth using.

If you do a bit of googling you'll find that the majority of studies show that systematic models don't outperform expert guesses. So yep, vibes are generally just as good.

Show me a large tech company that currently uses COCOMO to plan software projects.

Also if you are a dev outside of NASA or another safety critical industry and you think you’re an engineer, you’re kidding yourself.

Oh and try not to sound like an asshole next time.


Many people also take tarot card reading seriously as a way to predict the future.

As an engineer, you are not required to come up with a better way of predicting the future before you can dismiss tarot. You need only show that it doesn't work.


I think that's a "looking under the lamp post because that's where the light is" metric.

I'm not sure most developers, managers, or owners care about the calculated dollar value of their codebase. They're not trading code on an exchange. By condensing all software into a scalar, you're losing almost all important information.

I can see why it's important in court, obviously, since civil court is built around condensing everything into a scalar.


> Lines of code are meaningful when taken in aggregate

The linked article does not demonstrate this. It establishes no causal link. One can obviously bloat LOC to an arbitrary degree while maintaining feature parity. Very generously, assuming good-faith participants, it might reflect a kind of average human efficiency within the fixed environment of the time.

Carrying the conclusions of this study from the 80s into the LLM age is not justified scientifically.


COCOMO estimates the cost of the software, not the value. The cost is only weakly correlated with value.

> Lines of code are meaningful when taken in aggregate and useless as a metric for an individual’s contributions.

Yes, and in fact a lot of the studies that show the impact of AI on coding productivity get dismissed because they use LoC or PRs as a metric and "everyone knows LoC/PR counts is a BS metric." But the better designed of these studies specifically call this out and explicitly design their experiments to use these as aggregate metrics.


> at least as far as how courts (in the US) are concerned.

That's an anti-signal if we're being honest.


I am writing a book! I used AI to write 1 billion words this morning!

>at least as far as how courts are concerned.

Courts would be the last place to understand something like code quality or software project value....


That’s not how we started down this path. See snark-free sibling comment from padjo.


Both my claim and theirs are unsupported by evidence, therefore they are equally valid.


A third argument is that it was because of aliens from the planet Blotrox Prime. But I suppose without evidence we'll just have to accept that all three theories are equally probable.


Interesting how you decided to switch to hyperbole instead of providing evidence for your claim. Backing up your viewpoint would have easily shut me down, putting the ball in my court to do the same. Instead you gave a knee-jerk childish response.


Interesting that rather than try to bolster your claim you resorted to a logical fallacy to justify it.


Hypocritical; you did the same with the hyperbole. Why are you stooping to my level instead of being the better person?


Nope. Just a reductio ad absurdum that you decided to counter by asking that I maintain higher standards of debate than you.

The notion that atomic architecture came about because people are stupid and performative is not really useful. It's fairly misanthropic and raises the question of why it became so prevalent in JS specifically.


"The current market share of custom-built homes is approximately 19% of total single-family starts"

https://www.nahb.org/blog/2025/08/custom-home-building-grows...


New multifamily construction in the US that has to undergo design review is arguably fairly custom in that each site will have different requirements. I think it's fair to say that commoditization is a spectrum?


The structure of most residential construction in the US is standardized. Foundation (or slab), wood framing, etc. There are different levels of quality, but codes and standards mean that standardization is the norm.


tract != multifamily


I was not talking about tract housing. Where I live there is no tract housing construction.


Yes, to be clear I was intentionally not responding to the GP directly.

  Browser -> your server route -> server calls API -> server renders HTML -> htmx swaps it?
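
That flow can be sketched with a stdlib-only handler that returns an HTML fragment for htmx to swap (all names and routes here are hypothetical):

```python
# Sketch of the flow above: the server route calls an upstream API
# (stubbed here), renders an HTML fragment, and htmx swaps that
# fragment into the page. Names and routes are hypothetical.

def fetch_from_api(user_id: int) -> dict:
    # Stand-in for the upstream API call the server would make.
    return {"id": user_id, "name": "Ada"}

def render_fragment(user: dict) -> str:
    # Server-side rendered HTML; htmx replaces the target element with this.
    return f'<div id="user">{user["name"]}</div>'

# The page would trigger the request with markup along the lines of:
# <button hx-get="/user/1" hx-target="#user" hx-swap="outerHTML">Load</button>
print(render_fragment(fetch_from_api(1)))
```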


Sure it has. See modernism as a whole.


Modernism wasn't about "pushing the limits of what's possible" either. It was first and foremost a period style itself. That style included experimentation and "pushing some limits", but art in general wasn't that, then, before, or after (which is also why those limits went right back, and literature, for example, returned to far more classical forms after modernism's era passed; it didn't keep pushing at limits).


Correct me if I’m wrong, but if you wrote a dependency-free recursive descent parser in C89 thirty years ago it should still compile and return the same AST.
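
The shape of such a parser, sketched in Python rather than C89 for brevity (the grammar here is a toy, integer addition only):

```python
# Toy recursive descent parser: expr := INT ('+' INT)*
# Returns a nested-tuple AST. With no dependencies, the output is
# fully determined by the grammar, which is the stability point the
# comment above makes about a dependency-free C89 parser.

def parse(src: str):
    tokens = src.replace("+", " + ").split()
    pos = 0

    def next_int():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return ("int", int(tok))

    node = next_int()
    while pos < len(tokens) and tokens[pos] == "+":
        pos += 1
        node = ("add", node, next_int())
    return node

print(parse("1 + 2 + 3"))
```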


Well, if it made use of any UB anywhere in its code, and it gets compiled with the latest version of a modern compiler at -O3, it might, or it might not.


I mean... it will compile and return the same AST on the OS and hardware from 30 years ago. But if you want to get the same result today on modern hardware/software, you may discover you need to make some changes (or rather, people have been making little changes for 30 years to ensure you can still get the same AST). Generally software has either had little bits and bobs added and removed to keep it relevant, or it's fallen away and been forgotten.

