
I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.

I can't empathize with the complaint that we've "lost something" at all. We're on the precipice of something incredible. That's not to say there aren't downsides (WOPR almost killed everyone after all), but we're definitely in a golden age of computing.





> we're definitely in a golden age of computing.

Certainly not. Computers are still magic, but much of that magic is now controlled and being restricted by someone other than you.

Today most people's only computer is a cell phone, which is heavily locked down and designed for media consumption and to collect and give away every scrap of their personal/private data. Most people's desktop computers aren't much better. They are continuously used by others against the interests of the people who paid for them, sometimes explicitly keeping them from doing things they want or limiting what they can install.

People are increasingly ignorant of how computers work in ways that were never possible when you had to understand them to use them. SoCs mean that users, and even the operating system they use, aren't fully aware of what the devices are doing.

People have lost control of the computers they paid for and their own data. They now have to beg a small number of companies for anything they want (including their own data in the cloud). We're heading toward a future where you'll need to submit to a retinal scan just to view a website.

Computing today is more adversarial, restricted, opaque, centralized, controlled, and monitored than it has been in a very long time. "My computer talks to me" is not making up for that.


What you're saying might be true, but it's also a choice to delegate responsibility to someone other than yourself. I'm not saying that the adversarial state of computing is ok, just that most people don't care, or don't like the alternatives.

Even as someone concerned with the issues you mention, the shift happening now feels pretty magical to me. I can only imagine how non-technical people must feel.


People definitely care about things that a more open platform brings you, but today's open platforms have really bad downsides. The thing is, those downsides are artificial. They were manufactured by the corporations that prefer to be in control of our devices. It's not the natural state of things.

I often get asked by friends and family "can I get rid of annoyance X" or "can I have feature Y" on their Android phones, usually because they see that I've done it on my phone [0]. The answer is always "yes, I can set that up for you, but this will take an hour, I need to wipe all your data and a bunch of your apps will stop working".

There is no reason it should be like that. That was a choice by the manufacturers. They developed these DRM features and actively market them to developers - to the point where I can't submit an update to my little bus app without getting a prompt to add SafetyNet to it. They even somehow convinced pentesters to put "no cert pinning, root check and remote attestation" into their reports, so bank and government apps are the worst offenders.

It's not like people decided they prefer closed to open. They prefer working to non-working. And open platforms were broken intentionally by the developers of the closed ones.

It's like saying Americans all love their cars and simply decided not to use public transport. No, their public transport was crippled to the point of uselessness and their neighbourhoods were built in a way that makes public transport unfeasible. Cars work for them and trains don't. This was not their choice and it's painfully obvious when you see them go literally anywhere else on the planet and be amazed at how great trains are.

[0] Things like: global adblock, removing bloatware, floating windows or splitscreen, miracast, slide for brightness/volume, modded apps, lockscreen gestures, app instances, working shared clipboard, NFC UID emulation, automatic tethering, audio EQ...


> People definitely care

Sure people will care about things on paper or in conversation, but my point is that most don't care enough to do anything about it.

> There is no reason it should be like that

Most businesses exist primarily to make money, so they have all the reasons for their bad designs and behavior.

> They prefer working to non-working

Of course, but TANSTAAFL. We keep rewarding the providers with our money and data, so the beatings will continue if you want to keep up with the Joneses.

I hear the point you're making with the comparison to transportation, but you can't just build a road or a railway, while you can absolutely build software.


> They prefer working to non-working.

This sums up many things perfectly. I'll be stealing this.


Sure it's technically always a choice, but because society exists, some options are dramatically more plausible than others.

For example, say phones become more and more locked down and invasive. Technically you can choose not to have a phone, but how are you meant to function in today's society without a phone? Basically everything of importance assumes you have a phone. Technically you could make your own phone, I guess, but that's very difficult.

I don't think you can reasonably make the argument that because technically everyone can make their own choices, we should be OK with whatever status quo society ends up with.


I know, the expectation of phones and "just install our app" sucks, but it's easier than the alternatives for most people.

I don't think we should be ok with the status quo, and I think complaining about issues can be a catalyst for change, but rather than just complain about the state of affairs, I'm pointing out that alternatives exist, so it's on us to enact change.

TBH, I'm pessimistic about my words making a difference, but I want to promote independent/DIY mindset anyway. It's ironic that the frontier LLMs are proprietary platforms, yet they're enabling more independence to their users. Regardless, if everything goes to shit, we can still opt out and go back to the previous generation's lifestyle. No mobile phones and moving at the speed of snail mail doesn't sound all that bad, though I'd sure miss Google Maps.


It's not a boolean choice. How often, and how, you use a phone matters as well. While I am no stranger to screen time, my phone sees very limited and specialized use. I look at the weather, I talk to my car, I text when I am away from my desks. I am not using my phone now.

"Basically everything of importance assumes you have a phone" -- this is far from the truth in my world. It seems that how one uses a modern smartphone shapes one's world view of what's valued and what's possible.

When I visit my parents, I often fly to the major airport a hundred miles or so from them and take a bus from the airport to their town. There used to be a desk in the bus station attached to the airport where you could buy tickets, ask the clerk when the next bus to your destination was, etc. A few years ago they got rid of the desk and put up a sign with a QR code to download an app that gives schedules and lets you buy tickets. There is no other way to ride the buses now. This is just one example of how there's an assumption of "everyone has a smartphone" these days.

Don’t understand != don’t care.

Most people's only computer??? MOST people in the 80's had never, personally, touched a computer other than maybe an ATM machine. The fact that most people today don't care about a personal computing device in terms of what it does or how it does it isn't really a surprise.

Most people don't care how the toaster or microwave works, only that it does. Same for the show-me-movies boxes in the living room. And, really, most people shouldn't have to care.

This isn't to dismiss privacy concerns or even right to own/repair... let alone "free" internet. It's just that most people shouldn't have to care about most things.


Models you can run on your own (expensive) computer are just a year behind the SOTA. Linux exists. Why are you so pessimistic?

Typical HN comment. They’re so in the weeds of edge case 1% concerns they can’t see the golden age around them.

Most people living through golden ages might not know it. Many workers in the Industrial Revolution saw a decline in relative wages. Many in the Roman Empire were enslaved or impoverished. That doesn’t mean history doesn’t see these as golden ages, where a golden age is defined loosely as a broad period of enhanced prosperity and productivity for a group of people.

For all its downsides, pointed out amply above, the golden age of computing started 100 years ago and hasn’t ceased yet.


> Many workers in the Industrial Revolution saw a decline in relative wages.

Yeah! Why weren't all those children with mangled limbs more optimistic about the future? Why weren't they singing the praises of the golden age around them? Do you think it would have resulted in a golden age for anyone except a very small few if the people hadn't spoken out against the abuses of the greedy industrialists and robber barons and united against them?

If you can't see what's wrong with what's happening in front of you today and you can't see ahead to what's coming at you in the future you're going to be cut very badly by those "edge cases". Instead of blinding ourselves to them, I'd recommend getting into those weeds now so that we can start pulling them up by their roots.


The question should be "golden age FOR WHOM?" because the traditional meaning of that phrase implies a society-wide raising of the quality of life. It remains to be seen whether the advent of AI signifies an across-the-board improvement or a furthering of the polarization between the haves and have nots.

"the golden age of computing started 100 years ago"

Only 14% of Americans described themselves as "very happy" in recent studies, a sharp decline from 31% in 2018.

woohoo we did it, our neighbors are being sent to prison camps by people who work with the "golden age" bringers. Go team. Nice "golden age" you got there, peasant.


A gold rush is not the same thing as a golden age.

So what you're saying is lots of people being unemployed and dying from lack of resources is merely a "downside" and we should all just support your mediocre idea of what a "golden age" is?

You're right, this right here is the typical HN comment.

The golden age for me is any period where you have fully documented systems.

Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did.

And software that’s open and can be modified.

Instead what we have is:

- AI models which are little black boxes, beyond our ability to fully reason about.

- perpetual subscription services for the same software we used to “own”.

- hardware that is completely undocumented to all but a small few who are granted an NDA beforehand

- operating systems that are trying harder and harder to prevent us from running any software they haven’t approved because “security”

- and distributed systems becoming centralised, such as GitHub, CloudFlare, AWS, and so on and so forth.

The only thing special about right now is that we have added yet another abstraction on top of an already overly complex software stack to allow us to use natural language as pseudocode. And that is a genuinely special breakthrough, but it’s not enough by itself to overlook all the other problems with modern computing.


My take on the difference between now and then is “effort”. All those things mentioned above are now effortless but the door to “effort” remains open as it always has been. Take the first point for example. Those little black boxes of AI can be significantly demystified by, for example, watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself. We used to purchase software or write it ourselves before it became effortless to get it for free in exchange for ads and then a subscription when we grew tired of ads or were tricked into bait and switch. You can also argue that it has never been easier to write your own software than it is today.

Hostile operating systems. Take the effort to switch to Linux.

Undocumented hardware: well, there is far more open-source hardware out there today, and back in the day it was fun to reverse engineer hardware; now we just expect it to be open because we can’t be bothered to put in the effort anymore.

Effort gives me agency. I really like learning new things and so agentic LLMs don’t make me feel hopeless.


I’ve worked in the AI space and I understand how LLMs work in principle. But we don’t know the magic contained within a model after it’s been trained. We understand how to design a model, and how models work at a theoretical level. But we cannot know how good it will be at inference until we test it. So much of AI research is just trial and error, with different dials repeatedly tweaked until we get something desirable. So no, we don’t understand these models in the same way we might understand how a hashing algorithm works. Or a compression routine. Or an encryption cypher. Or any other hand-programmed algorithm.

I also run Linux. But that doesn’t change how the two major platforms behave and that, as software developers, we have to support those platforms.

Open source hardware is great but it’s not in the same league of price and performance as proprietary hardware.

Agentic AI doesn’t make me feel hopeless either. I’m just describing what I’d personally define as a “golden age of computing”.


but isn't this like a lot of other CS-related "gradient descent"?

when someone invents a new scheduling algorithm or a new concurrent data structure, it's usually based on hunches and empirical results (benchmarks) too. nobody sits down and mathematically proves their new linux scheduler is optimal before shipping it. they test it against representative workloads and see if there is uplift.

we understand transformer architectures at the same theoretical level we understand most complex systems. we know the principles, we have solid intuitions about why certain things work, but the emergent behavior of any sufficiently complex system isn't fully predictable from first principles.

that's true of operating systems, distributed databases, and most software above a certain complexity threshold.
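to make that concrete, this is roughly what the empirical route looks like with Go's built-in benchmark harness (a toy string-building comparison, not a scheduler -- the point is you measure rather than prove):

    package concat_test

    import (
        "strings"
        "testing"
    )

    // two ways to build the same string; both are "just loops" on paper,
    // the benchmark tells you which one you actually want
    func BenchmarkPlusEquals(b *testing.B) {
        for i := 0; i < b.N; i++ {
            s := ""
            for j := 0; j < 100; j++ {
                s += "x"
            }
            _ = s
        }
    }

    func BenchmarkBuilder(b *testing.B) {
        for i := 0; i < b.N; i++ {
            var sb strings.Builder
            for j := 0; j < 100; j++ {
                sb.WriteString("x")
            }
            _ = sb.String()
        }
    }

run `go test -bench .` and ship whichever wins on workloads that look like yours.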


No. Algorithm analysis is much more sophisticated and well defined than that. Most algorithms are deterministic, and it is relatively straightforward to identify complexity, O(). Even for nondeterministic algorithms we can evaluate asymptotic performance under different categories of input. We know a lot about how an algorithm will perform under a wide variety of input distributions regardless of determinism. In the case of schedulers, and other critical concurrency algorithms, performance is well known before release. There is a whole subfield of computer science dedicated to it. You don't have to "prove optimality" to know a lot about how an algorithm will perform. What's missing in neural networks is the why and how of input propagation through the network during inference. It is a black box of understandability. Under a great deal of study, but still very poorly understood.
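To make "knowing how it gets from inputs to outputs" concrete: for something like binary search you can state, before running anything, that it does roughly log2(n) comparisons on a sorted slice of length n, and you can point at the loop invariant that guarantees the answer. A toy sketch:

    package main

    import "fmt"

    // binarySearch returns the index of target in the sorted slice xs, or -1.
    // The invariant "target, if present, lies in xs[lo:hi]" holds on every
    // iteration, and hi-lo halves each time -- that is the whole O(log n)
    // argument, no experiments required.
    func binarySearch(xs []int, target int) int {
        lo, hi := 0, len(xs)
        for lo < hi {
            mid := lo + (hi-lo)/2
            if xs[mid] < target {
                lo = mid + 1
            } else {
                hi = mid
            }
        }
        if lo < len(xs) && xs[lo] == target {
            return lo
        }
        return -1
    }

    func main() {
        fmt.Println(binarySearch([]int{1, 3, 5, 7, 9}, 7)) // prints 3
    }

Nothing comparable exists for explaining why a particular forward pass produced a particular token.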

i agree w/ the complexity analysis point, but how far that theoretical understanding actually translates to real world deployment decisions is the question in both subfields. knowing an algorithm is O() tells you surprisingly little about whether it'll actually outperform alternatives on real hardware with real cache hierarchies, branch predictors, and memory access patterns. same thing with ML (just with the very different nature of GPU hw), both subfields have massive graveyards of "improvements" that looked great on paper (or in controlled environments) but never made it into production systems. arxiv is full of architecture tweaks showing SOTA on some benchmark, and the same w/ novel data structures/algorithms that nobody ever uses at scale.

I think you missed the point. Proving something is optimal is a much higher bar than just knowing how the hell the algorithm gets from inputs to outputs in a reasonable way. Even concurrent systems and algorithm bounds under input distributions have well established ways to evaluate them. There is literally no theoretical framework for how a neural network churns out answers from inputs, other than the most fundamental "matrix algebra". Big O, Theta, Omega, and asymptotic performance are all sound theoretical methods to evaluate algorithms. We don't have anything even that good for neural networks.

>Those little black boxes of AI can be significantly demystified by, for example, watching a bunch of videos (https://karpathy.ai/zero-to-hero.html) and spending at least 40 hours of hard cognitive effort learning about it yourself.

That's like saying you can understand humans by watching some physics or biology videos.


No it’s not

Nobody has built a human so we don’t know how they work

We know exactly how LLM technology works


We know _how_ it works but even Anthropic routinely does research on its own models and gets surprised

> We were often surprised by what we saw in the model

https://www.anthropic.com/research/tracing-thoughts-language...


Which is…true of all technologies since forever

Except it's not. Traditional algorithms are well understood because they're deterministic formulas. We know what the output is if we know the input. The surprises that happen with traditional algorithms are when they're applied in non-traditional scenarios as an experiment.

Whereas with LLMs, we get surprised even when using them in an expected way. This is why so much research happens investigating how these models work even after they've been released to the public. And it's also why prompt engineering can feel like black magic.


I don’t know what to tell you other than to say that the concept of determinism in engineering is extremely new

Everything you said right now holds equally true for chemical engineering and biomedical engineering, so, like, you need to get some experience


I think the historical record pushes back pretty strongly on the idea that determinism in engineering is new. Early computing basically depended on it. Take the Apollo guidance software in the 60s. Those engineers absolutely could not afford "surprising" runtime behavior. They designed systems where the same inputs reliably produced the same outputs because human lives depended on it.

That doesn't mean complex systems never behaved unexpectedly, but the engineering goal was explicit determinism wherever possible: predictable execution, bounded failure modes, reproducible debugging. That tradition carried through operating systems, compilers, finance software, avionics, etc.

What is newer is our comfort with probabilistic or emergent systems, especially in AI/ML. LLMs are deterministic mathematically, but in practice they behave probabilistically from a user perspective, which makes them feel different from classical algorithms.

So I'd frame it less as "determinism is new" and more as "we're now building more systems where strict determinism isn't always the primary goal."

Going back to the original point, getting educated on LLMs will help you demystify some of the non-determinism but as I mentioned in a previous comment, even the people who literally built the LLMs get surprised by the behavior of their own software.


I refuse to believe you sincerely think this is a salient point. Determinism was one of the fundamental axioms of software engineering.

That’s some epic goal post shifting going on there!!

We’re talking about software algorithms. Chemical and biomedical engineering are entirely different fields. As are psychology, gardening, and morris dancing


I said all technologies

Yeah. Which any normal person would take to mean “all technologies in software engineering” because talking about any other unrelated field would just be silly.

We know why they work, but not how. SotA models are an empirical goldmine, we are learning a lot about how information and intelligence organize themselves under various constraints. This is why there are new papers published every single day which further explore the capabilities and inner-workings of these models.

You can look at the weights and traces all you like with telemetry and tracing

If you don’t own the model then you have a problem that has nothing to do with technology


Ok, but the art and science of understanding what we're even looking at is actively being developed. What I said stands, we are still learning the how. Things like circuits, dependencies, grokking, etc.

Have you tried using GenAI to write documentation? You can literally point it to a folder and say, analyze everything in this folder and write a document about it. And it will do it. It's more thorough than anything a human could do, especially in the time frame we're talking about.

If GenAI could only write documentation it would still be a game changer.


But it writes mostly useless documentation, which takes time to read and decipher.

And worse, if you are using it for public documentation, sometimes it hallucinates endpoints (I don't want to say too much here, but it happened recently to a widely used B2B SaaS).


Loop it. Use another agent (from a different company helps) to review the code and documentation and call out any inconsistencies.

I run a bunch of jobs weekly to review docs for inconsistencies and write a plan to fix them. It still needs humans in the loop if the agents don’t converge after a few turns, but it’s largely automatic (I babysat it for a few months validating each change).


That might work for hallucinations, but it doesn't work for useless verbosity. And the main issue is that LLMs don't always distinguish useless verbosity from necessary detail, so even when I ask it to reduce the verbosity, it removes everything save a few useful comments/docstrings, including some comments that I deemed useful. In the end I have to do the work of cutting the verbosity manually anyway.

The problem with looping is that any hallucination or incorrect assumption in an early loop becomes an amplifying garbage-in-garbage-out problem.

To translate your answer:

- “You’re not spending enough money”

- “You’re not micromanaging enough”

Seriously?


It can generate useful documentation or useless documentation. It doesn't take very long to instruct the LLM to generate the documentation, and then check whether it matches your understanding of the project later. Most real documentation is about as wrong as LLM-generated documentation anyway. Documenting code is a language-to-language translation task, which LLMs are designed for.

The problem with documentation that I described wasn’t about the effort of writing it. It was that modern chipsets are trade secrets.

When you bought a computer in the 80s, you’d get a technical manual about the internal workings of the hardware. In some cases even going as far as detailing what the registers did on their graphics chipset or CPU.

GenAI wouldn’t help here for modern hardware because GenAI doesn’t have access to those specifications. And if it did, then it would already be documented, so we wouldn’t need GenAI to write it ;)


Have you tried reading the documentation it generates?

> The golden age for me is any period where you have the fully documented systems. Hardware that ships with documentation about what instructions it supports. With example code. Like my 8-bit micros did. And software that’s open and can be modified.

I agree that it would be good. (It is one reason why I wanted to design a better computer, which would include full documentation about the hardware and the software (hopefully enough to make a compatible computer), as well as full source code (which can help if some parts of the documentation are unclear, but can also be used to make your own modifications if needed).) (In some cases we have some of this already, but not entirely. Not all hardware and software has the problems you list, although they are too common now. Making a better computer will not prevent such problematic things on other computers, and will not entirely prevent such problems on the new computer design either, but it would help a bit, especially if it is actually designed well rather than badly.)


Actually this makes me think of an interesting point. We DO have too many layers of software.. and rebuilding is always so cost prohibitive.

Maybe an interesting route is using LLMs to flatten/simplify.. so we can dig out from under some of the complexity.


I’ve heard this argument made before and it’s the only side of AI software development that excites me.

Using AI to write yet another run-of-the-mill web service in the same bloated frameworks and programming languages designed for the lowest common denominator of developers really doesn’t feel like it’s taking advantage of the leap in capabilities that AI brings.

But using AI to write native applications in low level languages, built for performance and memory utilisation, does at least feel like we are getting some actual quality-of-life savings in exchange for all those fossil fuels burnt to crunch the LLMs’ tokens.


> perpetual subscription services for the same software we used to “own”.

In another thread, people were looking for things to build. If there's a subscription service that you think shouldn't be a subscription (because they're not actually doing anything new for that subscription), disrupt the fuck out of it. Rent seekers about to lose their shirts. I pay for eg Spotify because there's new music that has to happen, but Dropbox?

If you're not adding new whatever (features/content) in order to justify a subscription, then you're only worth the electricity and hardware costs or else I'm gonna build and host my own.


People have been building alternatives to MS Office, Adobe Creative Suite, and so on and so forth for literally decades and yet they’re still the de facto standard.

Turns out it’s a lot harder to disrupt than it sounds.


It's really hard. But not impossible. Figma managed to. What's different this time around is that AI-assisted programming means people can go in and fix bugs, and the interchange becomes the important part.

Figma is another subscription-only service with no native applications.

The closest thing we get to “disruption” these days are web services with complimentary Electron apps, which basically just serves the same content as the website except for duplicating the memory overhead of running a fresh browser instance.


Figma didn't disrupt anything except Adobe; it's the same shitty business model and the same shitty corporate overlords.

Dropbox may not be a great example, either. It's storage and bandwidth, and both are expensive, even if the software wasn't being worked on.

But application software that is, or should be, running locally, I agree. Charge for upgrades, by all means, but not for the privilege of continued use of an old, unmaintained version.


Local models exist and the knowledge required for training them is widely available in free classes and many open projects. Yes, the hardware is expensive, but that's just how it is if you want frontier capability. You also couldn't have a state of the art mainframe at home in that era. Nor do people expect to have industrial scale stuff at home in other engineering domains.

> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before.

Maybe they made us feel magic, but actual magic is the opposite of what I want computers to be. The “magic” for me was that computers were completely scrutable and reason-able, and that you could leverage your reasoning abilities to create interesting things with them, because they were (after some learning effort) scrutable. True magic, on the other hand, is inscrutable, it’s a thing that escapes explanation, that can’t be reasoned about. LLMs are more like that latter magic, and that’s not what I seek in computers.

> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.

I always preferred the Star-Trek-style ship computers that didn’t exhibit personality, that were just neutral and matter-of-fact. Computers with personality tend to be exhausting and annoying. Please let me turn it off. Computers with personality can be entertaining characters in a story, but that doesn’t mean I want them around me as the tools I have to use.


> The “magic” for me was that computers were completely scrutable and reason-able

Yes, and computers were something that gave you powerful freedom. You could make a computer do anything it was physically able to do, as long as your mind could keep up. Computers followed logic, they didn't have opinions, and they gave you full, unlimited control.


I have no idea what everyone is talking about. LLMs are based on relatively simple math, and inference is much easier to learn and customize than, say, Android APIs. Once you do, you can apply familiar programming-style logic to messy concepts like language and images. Give your model a JSON schema like "warp_factor": Integer if you don't want chatter; that's way better than the Star Trek computer could do. Or have it write you a simple domain-specific library on top of the Android API that you can then program from memory like old-style BASIC, rather than having to run to Stack Overflow for every new task.
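For example, on the consuming side the "schema, not chatter" idea is just: declare the shape you asked for, decode strictly, and reject anything else. A rough Go sketch (the Command struct and the hard-coded "response" are made up for illustration; the actual model/API call is left out):

    package main

    import (
        "encoding/json"
        "fmt"
        "strings"
    )

    // Command is the shape we told the model to emit -- no prose allowed.
    type Command struct {
        WarpFactor int    `json:"warp_factor"`
        Heading    string `json:"heading"`
    }

    func parse(raw string) (Command, error) {
        var c Command
        dec := json.NewDecoder(strings.NewReader(raw))
        dec.DisallowUnknownFields() // any extra "chatter" fields fail fast
        err := dec.Decode(&c)
        return c, err
    }

    func main() {
        // stand-in for a model response constrained to the schema
        cmd, err := parse(`{"warp_factor": 5, "heading": "127 mark 2"}`)
        fmt.Println(cmd, err)
    }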

You can’t reason about inference (or training) of LLMs on the semantic level. You can’t predict the output of an LLM for a specific input other than by running it. If you want the output to be different in a specific way, you can’t reason with precision that a particular modification of the input, or of the weights, will achieve the desired change (and only that change) in the output. Instead, it’s like a slot machine that you just have to try running again.

The fact that LLMs are based on a network of simple matrix multiplications doesn’t change that. That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.


> That’s like saying that the human brain is based on simple physical field equations, and therefore its behavior is easy to understand.

Right, which is the point: LLMs are much more like human coworkers than compilers in terms of how you interact with them. Nobody would say that there's no point to working with other people because you can't predict their behavior exactly.


This thread is about what software developers like. It’s common knowledge that many programmers like working with computers because that’s different in specific ways from working with people. So saying that LLMs are just like people doesn’t help here.

Yeah, and you are sugarcoating it - the stereotype that some programmers actively dislike socializing is a stereotype for a reason.

Working with people != socializing, those are two very different things.

You can be professional and collaborate productively at work with people who you don't like at a personal level and have no intention to socialize with. The Mythbusters were the best example for this.

I get along great with all colleagues but I stopped joining them for coffee and watercooler smalltalk since we don't vibe and have nothing in common, so not only is it a waste of my time, it's also an energy drain for me to focus and fake interest in forced social interactions. But that doesn't mean we can't be productive together at technical stuff. I do think my PoV resonates with most people.


There is quite a bit of overlap - both require social skills.

Yeah there's a reason there is a stereotype/trope about programmers not liking people. I like a lot of people. But I would hate to work with many of them even if I like hanging out with them.

That said, I do like having an LLM that I can treat like the crappy bosses on TV treat their employees. When it gets something totally wrong I can yell at it and it'll magically figure out the right solution, but still keep a chipper personality. That doesn't work with humans.


Kinda funny how we managed to type the exact same thought at the same time.

You beat me by two mins :)

Be careful. You get good at what you practice.

> LLMs are much more like human coworkers than compilers in terms of how you interact with them.

Human coworkers are much more predictable. A workplace where people acted like LLMs would be a complete zoo. Imagine asking for an endpoint modification and the result is a broken backend. Or brainstorming with a PM and the reply is "you're absolutely right, whatever I was saying was completely wrong, but let me repeat it in a different manner".


> Or brainstorming with a PM and the reply is "you're absolutely right, whatever I was saying was completely wrong, but let me repeat it in a different manner".

As if this isn't incredibly common..?


Nobody would say:

"...there's no point to working with other people because you can't predict their behavior exactly."

Because you CAN predict coworker behavior to a useful point. Ex, they'll probably reply to that email on Monday. They'll probably show you a video that you find less amusing than they do.

With LLMs you can't be quite sure whether they will make something up, forget a key detail, hide a mistake that will obviously be found out when everything breaks, etc. Stupid things that most employable people wouldn't do, like building a car and forgetting the wheels.


> LLMs are much more like human coworkers

Specifically they are like Julius, the colleague managers like but is a drag on everyone else.

https://ploum.net/2024-12-23-julius-en.html


What are your inputs and outputs? If inputs are zip files and outputs are uncompressed text, don't use an LLM. If inputs are English strings and outputs are localized strings, LLMs are way more accurate than any procedural code you might attempt for the purpose. Plus changing the style of outputs by modifying inputs/weights is also easier; you just need to provide a few thousand samples rather than think of every case. Super relevant for human coding: how many hobbyists or small businesses have teams of linguists on staff?

In some ways, I'd say we're in a software dark age. In 40 years, we'll still have C, bash, grep, and Mario ROMs, but practically none of the software written today will still be around. That's by design. SaaS is a rent seeking business model. But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies. There's no way you'll be able to take a repo from 2026 and spin it up in 2050 without major work.

One question is how will AI factor in to this. Will it completely remove the problem? Will local models be capable of finding or fixing every dependency in your 20yo project? Or will they exacerbate things by writing terrible code with black hole dependency trees? We're gonna find out.


> That's by design. SaaS is a rent seeking business model.

Not all software now is SaaS, but unfortunately it is too common now.

> But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies.

Some people (including myself) prefer to write programs without too many dependencies, in order to avoid that problem. Other things also help: some people write programs for older systems which can be emulated, or use simpler, more portable C code, etc. There are things that can be done to avoid too many dependencies.

There is uxn, which is a simple enough instruction set that people can probably implement it without too much difficulty. Although some programs might need some extensions, and some might use file names, etc., many programs will work, because it is designed simply enough that they will work.


Big uxn fan

I’m not sure Go belongs on that list. Otherwise I hear what you’re saying.

A large percentage of the code I've written the last 10 years is Go. I think it does somewhat better than the others in some areas, such as relative simplicity and having a robust stdlib, but a lot of this is false security. The simplicity is surface level. The runtime and GC are very complex. And the stdlib being robust means that if you ever have to implement a compiler from scratch, you have to implement all of std.

All in all I think the end result will be the same. I don't think any of my Go code will survive long term.


I’ve got 8 year old Go code that still compiles fine on the latest Go compiler.

Go has its warts but backwards compatibility isn’t one of them. The language is almost as durable as Perl.


8 years is not that long. If it can still compile in, say, 20 years then sure, but 8 years in this industry isn't that long at all (unless you're into self-flagellation by working on the web).

Except 8 years is impressive by modern standards. These days, most popular ecosystems have breaking changes that would cause even just 2-year-old code bases to fail to compile. It's shit and I hate it. But that's one of the reasons I favour Go and Perl -- I know my code will continue to compile with very little maintenance years later.

Plus 8 years was just an example, not the furthest back Go will support. I've just pulled a project I'd written against Go 1.0 (the literal first release of Golang). It's 16 years old now, uses C interop too (so not a trivial Go program), and I've not touched the code in the years since. It compiled without any issues.

Go is one of the very few programming languages that has an official backwards compatibility guarantee. This does lead to some issues of its own (eg some implementations of new features have been somewhat less elegant because the Go team favoured an approach that didn't introduce changes to the existing syntax).
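Not that project, obviously, but for a sense of what "not a trivial Go program" can mean while still building unchanged for years, here's a minimal sketch of a Go-plus-cgo program of the kind that has compiled across many releases:

    package main

    /*
    #cgo LDFLAGS: -lm
    #include <math.h>
    */
    import "C"

    import "fmt"

    func main() {
        // call straight into libm through cgo; this interop surface has
        // been stable for a very long time
        fmt.Println("sqrt(2) =", float64(C.sqrt(2)))
    }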


8 years is only "not that long" because we have gotten better at compatibility.

How many similar programs written in 1999 compiled without issue in 2007? The dependency and tooling environment is as robust as it's ever been.


> because we have gotten better at compatibility.

Have we though? I feel the opposite is true. These days developers expect users of their modules and frameworks to be regularly updating those dependencies, and doing so dynamically from the web.

While this is true for active code bases, you can quickly find that stable but unmaintained code will eventually rot as its dependencies deprecate.

There aren't many languages out there where their wider ecosystem thinks about API-stability in terms of years.


If they change the syntax, sure, but you can always use today's compiler if necessary. I generally find Go binaries have even fewer external dependencies than a C/C++ project.

On the scale of decades that's an incorrect assumption, unless you mean running the compiler within an emulated system.

It depends on your threat model. Mine includes the compiler vendors abandoning the project and me needing to make my own implementation. Obviously unlikely, and someone else would likely step in for all the major languages, but I'm not convinced Go adds enough over C to give away that control.

As long as I have a stack of esp32s and a working C compiler, no one can take away my ability to make useful programs, including maintaining the compiler itself.


For embedded that probably works. For large C programs you're going to be just as stuck as you are with Go.

I think relatively few programs need to be large. Most complexity in software today comes from scale, which usually results in an inferior UX. Take Google drive for example. Very complicated to build a system like that, but most people would be better served by a WebDAV server hosted by a local company. You'd get way better latency and file transfer speeds, and the company could use off the shelf OSS, or write their own.
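For a sense of how little code that takes in Go: a basic WebDAV server is essentially the stock golang.org/x/net/webdav handler plus a listener. A rough sketch (the path and port are placeholders, and you'd put TLS and auth in front of it before letting customers near it):

    package main

    import (
        "log"
        "net/http"

        "golang.org/x/net/webdav"
    )

    func main() {
        h := &webdav.Handler{
            FileSystem: webdav.Dir("/srv/files"), // directory to expose (placeholder)
            LockSystem: webdav.NewMemLS(),        // in-memory lock manager
        }
        log.Fatal(http.ListenAndServe(":8080", h)) // plain HTTP for the sketch only
    }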

We have what I've dreamed of for years: the reverse dictionary.

Put in a word and see what it means? That's been easy for at least a century. Have a meaning in mind and get the word? The only way to get this before was to read a ton of books and be knowledgable or talk to someone who was. Now it's always available.


This is a great description of how I use Claude.

> Have a meaning in mind and get the word? The only way to get this before was to read a ton of books and be knowledgable or talk to someone who was.

There was another way: Make one up.

That is what the people you read from/talked to did before relaying it to you.


If you want to establish a new word, you need to make sure that the word also sticks in common use. Otherwise the word will not hold its own meaning. For existing concepts it's much better to use the words that have already been established, because other people can look them up in a dictionary.

> If you want to establish a new word, you need to make sure that the word also sticks in common use.

That depends on your goals. If you are writing in your private journal, or a comment on HN, it doesn't matter one bit.

If you want to find commonality with other people it is significantly more efficient, but still not required. It is not like one is born understanding words. They are not passed down from the heavens. They are an invention. When I say 'sloopydoopidydoo' you might not know what I intend by it right away, but the path to figuring it out is a solved problem. Even young children can handle it.

> For existing concepts it's much better to use the words that have already been established, because other people can look them up in a dictionary.

Let's put it to the test: I added enums to the programming language I am working on. Tell me, with your dictionary in hand, what do I mean by that?

Here's the thing: According to the dictionary, an enum is something like Go's iota or C's enum. But many people will tell you that Go doesn't have enums — that an enum is what others might recognize as a tagged union. That kind of language evolution happens all the time. So, what do I mean? Am I using the dictionary definition, or the community definition that is quickly gaining favour and will no doubt be added to the dictionary as soon as someone has a chance to update it? Both uses have been widely established in my opinion. In fact, the Swift programming language's documentation even acknowledges both uses and then goes on to explain what it means by "enum" to remove any confusion.

I look forward to seeing if you captured my intent.
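For concreteness, here are the two readings sketched in Go, since that's where the disagreement usually surfaces (the names are invented):

    package main

    import (
        "fmt"
        "math"
    )

    // Reading 1: enum as named integer constants (C's enum, Go's iota).
    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    // Reading 2: enum as a tagged union (what Rust and Swift call an enum).
    // Go has no native construct for this; a common emulation is a sealed
    // interface plus a type switch.
    type Shape interface{ isShape() }

    type Circle struct{ Radius float64 }
    type Rect struct{ W, H float64 }

    func (Circle) isShape() {}
    func (Rect) isShape()   {}

    func area(s Shape) float64 {
        switch v := s.(type) {
        case Circle:
            return math.Pi * v.Radius * v.Radius
        case Rect:
            return v.W * v.H
        }
        return 0
    }

    func main() {
        fmt.Println(Green, area(Circle{Radius: 1}), area(Rect{W: 2, H: 3}))
    }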


> Now it's always available.

And often incorrect! (and occasionally refuses to answer)


Is it? I’ve seen AI hallucinations, but they seem to be increasingly rare these days.

Much of the AI antipathy reminds me of Wikipedia in the early-mid 2000s. I remember feeling amazed with it, but also remember a lot of ranting by skeptics about how anyone could put anything on there, and therefore it was unreliable, not to be used, and doomed to fail.

20 years later and everyone understands that Wikipedia may have its shortcomings, and yet it is still the most impressive, useful advancement in human knowledge transfer in a generation.


I think robust crowdsourcing is probably the biggest capital-A Advancement in humanity's capabilities that came out of the internet, and there's a huge disparity in results that comes from how that capability is structured and used. Wikipedia designed protocols, laws, and institutions that leverage crowdsourcing to be the most reliable de facto aggregator of human knowledge. Social media designed protocols, laws, and institutions to rot people's brains, surveil their every move, and enable mass-disinformation to take over the public imagination on a regular basis.

I think LLMs as a technology are pretty cool, much like crowdsourcing is. We finally have pretty good automatic natural language processing that scales to large corpora. That's big. Also, I think the state of the software industry that is mostly driving the development, deployment, and ownership of this technology is mostly doing uninspired and shitty things with it. I have some hope that better orgs and distributed communities will accomplish some cool and maybe even monumental things with them over time, but right now the field is bleak, not because the technology isn't impressive (although somehow despite how impressive it is it's still being oversold) but because silicon valley is full of rotten institutions with broken incentives, the same ones that brought us social media and subscriptions to software. My hope for the new world a technology will bring about will never rest with corporate aristocracy, but with the more thoughtful institutions and the distributed open source communities that actually build good shit for humanity, time and time again


It is! But you can then verify it via a correct, conventional forward dictionary.

The scary applications are the ones where it's not so easy to check correctness...


Words are something made up to express whatever the speaker/author intends them to, so there is really no such thing as correct or incorrect there. A dictionary can hint at the probability of someone else understanding a word absent of other context, which makes for a useful tool, but that is something quite different to establishing correctness.

As for things that can actually be incorrect, that has always been impossible, but we accept the human consensus to be a close enough approximation. With that, verifying 'correctness' to the degree that is possible is actually quite easy through validating it across many different LLMs trained on the human consensus. They will not all hallucinate identically. If convergence is found, then you have also found the human consensus. That doesn't prove correctness — we have never had a way to do that — but it is equivalent to how we have always dealt with establishing what we believe is correct.


Your first paragraph, while perhaps philosophically true to a solipsist, is not actually useful in the world we live in.

It is a fundamental property of the universe. Whether or not it is useful is immaterial. Humans are unable to read minds. They can only make up words and use them as they intend. There is no other way.

Despite your insistence, I think you will find that the human consensus is that it is useful. The human consensus is especially biased in this case, I will grant you that, but it seems few humans wish they were bears in the forest. Our ability to communicate so effectively in such a messy, imperfect environment is what has enabled us to be unlike all the other animals.

It might not sound like it should work on paper, but in the real world it does.


Okay, let's give it a try.

asdjklfh asdjhgflkj bveahrvjkhgv hjagsdfhj hgertjhga ads fhdfjmjhkr

Nope, that's incorrect english.

Turns out that because we've defined "words" as a thing that means a thing, now there are rules around "language" and "words". So while you're welcome to invent whatever combination of sounds you prefer to mean what you like, those sounds can be "correct" or "incorrect" as soon as other people become involved, because now you've entered into a social construct that extends beyond yourself.

So again your conclusion is technically correct, in a navel-gazing "the universe is what I perceive" sort of way, but counterproductive to use as a building block for communication.


> Nope, that's incorrect english

There is no correct or incorrect here, but I will say it looks perfectly fine to me — naturally, as anything goes. I don't understand it. Is that what you are trying to communicate? There are many words I don't understand; even ones used commonly enough to be found in the dictionary. That is nothing new.

Here's the magic: I don't need to understand. Nobody is born with the understanding. Where communication is desired, we use other devices to express lack of understanding and keep trying to convey intent until a shared understanding is reached. I don't yet understand what that means, but assuming you are here in good faith, I eventually will as you continue to work to communicate your intent behind it.

I know computer people who spend their days writing in programming languages that never talk back struggle with this concept, but one's difficulties in understanding the world around them doesn't define that world.

> there are rules around "language" and "words".

If you are trying to suggest that there is some kind of purity test, it is widely recognized that what is often called Frisian is the closest thing to English as it used to be spoken. What you are writing looks nothing like it. If there are English rules, why don't you follow them? The answer, of course, is that the only "rules" are the ones you decide to make up in the moment. Hence why English today is different from English yesterday and is very different from English centuries ago.


Right. Except the dictionary analogy only goes so far and we reach the true problem.

It's not an analogy.

Sure, but it's easy to check if it's incorrect and try again.

Forgive me if "just dig your way out of the hole" doesn't sound appealing.

You're free to use whatever tools you like.

> You're free to use whatever tools you like.

this is important, i feel like a lot of people are falling into the "stop liking what i don't like" way of thinking. Further, there's a million different ways to apply an AI helper in software development. You can adjust your workflow in whatever way works best for you. ..or leave it as is.


You're right, though I think a lot of the push back is due to the way companies are pushing AI usage onto employees. Not that complaining on HN will help anything...

Surely you, a programmer, can imagine a way to automate this process

No, I actually haven't made, nor desire to make, a way to automate "thinking about, researching, and solving a problem".

When you use it to look up a single word, yeah, but people here use it to look up a thousand words at once and then can't check it all.

That doesn't make the tool bad.

Your comment does not align with my experience.

Garbage in, garbage out still applies.


> Garbage in, garbage out still applies.

Which is why we shouldn't be surprised when AI, trained on the collective wisdom of Facebook posts and YouTube comments, keeps lying to us.


"The only way to get this before was to read a ton of books and be knowledgable or talk to someone who was"

Did you have trouble with this part?


This seems like a hostile question.

Yeah, sure, it can be perceived like that. The message I'm responding to shows a blatant disregard for millennia of scriptural knowledge traditions. It's an 'I have a pocket calculator, why should I study math' kind of attitude, presenting itself in a celebratory manner.

To me it is reminiscent of liberalist history, the idea that history is a constant progression from animalistic barbarism to civilisation, and nothing but the latest thing is of any value. Instead of jumping to conclusions and showing my loathing for this particular tradition I decided to try and get more information about where they're coming from.


If I have a blatant disregard for millennia of scriptural knowledge traditions, so did Noah Webster when he compiled a dictionary. So did Carl Linnaeus when he classified species. So did the Human Genome Project. I have a pocket calculator, yet I know how to do long division. I use LLMs to learn and to enhance my work. A dictionary is a shortcut to learning what a word means without consulting an entire written corpus, as the dictionary editors have already done this.

Is my use of a dictionary a blatant disregard for millennia of scriptural knowledge traditions? I don’t think so at all. Rather, it exemplifies how human knowledge advances: we build on the work of our predecessors and contemporaries rather than reinvent the wheel every time. LLM use is an example of this.


You're avoiding my question. Since you're comparing yourself to Noah Webster, do you have some examples of your achievements?

You're confused, and as evidence I cite your feigned interest in that guy's achievements, which are irrelevant. You want to argue on the internet.

The "reverse dictionary" is called a "thesaurus". Wikipedia quotes Peter Mark Roget (1852):

> ...to find the word, or words, by which [an] idea may be most fitly and aptly expressed

Digital reverse dictionaries / thesauri like https://www.onelook.com/thesaurus/ can take natural language input, and afaict are strictly better at this task than LLMs. (I didn't know these tools existed when I wrote the rest of this comment.)

I briefly investigated LLMs for this purpose, back when I didn't know how to use a thesaurus; but I find thesauruses a lot more useful. (Actually, I'm usually too lazy to crack out a proper thesaurus, so I spend 5 seconds poking around Wiktionary first: that's usually Good Enough™ to find me an answer, when I find an answer I can trust it, and I get the answer faster than waiting for an LLM to finish generating a response.)

There's definitely room to improve upon the traditional "big book of synonyms with double-indirect pointers" thesaurus, but LLMs are an extremely crude solution that I don't think actually is an improvement.


A thesaurus is not a reverse dictionary

Really?

"What's a word that means admitting a large number of uses?"

That seems hard to find in a thesaurus without either versatile or multifarious as a starting point (but those are the end points).


I plugged "admitting a large number of uses" into OneLook Thesaurus (https://www.onelook.com/thesaurus/?s=admitting%20a%20large%2...), and it returned:

> Best match is versatile which usually means: Capable of many different uses

with "multi-purpose", "adaptable", "flexible" and "multi-use" as the runner-up candidates.

---

Like you, I had no idea that tools like OneLook Thesaurus existed (despite how easy it would be to make one), so here's my attempt to look this up manually.

"Admitting a large number of uses" -> manually abbreviated to "very useful" -> https://en.wiktionary.org/wiki/useful -> dead end. Give up, use a thesaurus.

https://www.wordhippo.com/what-is/another-word-for/very_usef..., sense 2 "Usable in multiple ways", lists:

> useful multipurpose versatile flexible multifunction adaptable all-around all-purpose all-round multiuse multifaceted extremely useful one-size-fits-all universal protean general general-purpose […]

Taking advantage of the fact my passive vocabulary is greater than my active vocabulary: no, no, yes. (I've spuriously rejected "multipurpose" – a decent synonym of "versatile [tool]" – but that doesn't matter.) I'm pretty sure WordHippo is machine-generated from some corpus, and a lot of these words don't mean "very useful", but they're good at playing the SEO game, and I'm lazy. Once we have versatile, we can put that into an actual thesaurus: https://dictionary.cambridge.org/thesaurus/versatile. But none of those really have the same sense as "versatile" in the context I'm thinking of (except perhaps "adaptable"), so if I were writing something, I'd go with "versatile".

Total time taken: 15 seconds. And I'm confident that the answer is correct.

By the way, I'm not finding "multifarious" anywhere. It's not a word I'm familiar with, but that doesn't actually seem to be a proper synonym (according to Wiktionary, at least: https://en.wiktionary.org/wiki/Thesaurus:heterogeneous). There are certainly contexts where you could use this word in place of "versatile" (e.g. "versatile skill-set" → "multifarious skill-set"), but I criticise WordHippo for far less dubious synonym suggestions.


'multifarious uses' -> the implication would be having not just many but also a wide diversity of uses

M-W gives an example use of "Today’s Thermomix has become a beast of multifarious functionality. — Matthew Korfhage, Wired News, 21 Nov. 2025 "

wordhippo strikes me as having gone beyond the traditional paper thesaurus, but I can accept that things change and that we can make a much larger thesaurus than we did when we had to collect and print. thesaurus.com does not offer these results, though, as a reflection of a more traditional one, nor does the m-w thesaurus.


So you weren't actually using the thesaurus as a reverse dictionary here. The thesaurus contains definitions, and the reverse dictionary was the search tool built into their website. It would work just as well against a dictionary as a thesaurus.

Importantly to the point being discussed, what you did does not work at all against an actual physical thesaurus book.


If the thesaurus had an entry for "very useful" (as WordHippo does), then yes, it would work against an actual physical thesaurus book. This whole cluster of words is coded into Wiktionary incorrectly – for example, https://en.wiktionary.org/wiki/utility#Synonyms is a subsection of "Adjective" despite listing synonyms for a sense of the noun:

> (state of being useful): usefulness, value, advantages, benefit, return, merits, virtue, note

where "note" is a synonym of distinction, not utility, and Thesaurus:utility has fewer entries than this. Versatility should be listed in Thesaurus:utility as a related concept.


Paper thesauruses (thesauri?) won't have modified phrases like "very useful" as entries in their pages.

Furthermore, even if we allow "very useful", that's a far cry from "admitting a large number of uses". The latter requires a search engine to properly map.

Which they've been good at for a while. You could have googled "word meaning admitting a large number of uses" back in 2018 and gotten good answers.

My point is, the tools you've linked to are useful/versatile, but it's not the thesaurus that makes them so useful, it's the digital query engine built on top of the thesaurus.


Even if I don't know the word "versatile", I can go from the phrase "admitting a large number of uses" to the phrase "very useful". The original point I made (before I discovered OneLook Thesaurus) described the effectiveness of a procedure that was just manually looking things up in databases, as one might do in a paper thesaurus. (I could print out Wiktionary and WordHippo in alphabetical order, buy a Cambridge Thesaurus and some bookshelves, and perform the procedure entirely offline, with only a constant factor slowdown.)

Do you know which technology implements that search? It seems LLM-like.

They've got that information scattered around a few pages. The Help page says they use (a modified version of) Datamuse for lookup, with Wikipedia, Wiktionary and WordNet providing dictionary definitions. The Datamuse API (https://datamuse.com/api/) uses a variety of GOFAI databases, plus word2vec: it's all pre-2017 tech. OneLook additionally uses https://arxiv.org/abs/1902.02783 for one of its filters (added 2022): more details can be found on the Datamuse blog: https://www.datamuse.com/blog/. https://web.archive.org/web/20160507022201/http://www.oneloo... confirms the "longer queries" support (which you described as "LLM-like") was added in 2016, so it can't possibly be using LLMs; though I'm not sure how it does work. There may be some hints in the OneLook newsletter (e.g. https://onelook.com/newsletter/issue-10/ (10 July 2025?) cryptically notes that "Microdefinitions are algorithmically generated […] they go through a series of automated cross-checks against public domain dictionaries, and the suspicious ones are vetted by humans"), but the newsletter isn't about that, so I doubt there's much information there.
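For the curious, here's a minimal sketch of the kind of lookup you can run against the public Datamuse API yourself. The "ml" ("means like") and "max" parameters are documented on the API page linked above; whether OneLook's longer-query support is built directly on "ml" is my assumption, not something they state, so treat this as an approximation of what their thesaurus does rather than the actual pipeline.

    import requests  # third-party HTTP library, assumed installed

    # Ask Datamuse for words whose meaning is close to a whole phrase.
    resp = requests.get(
        "https://api.datamuse.com/words",
        params={"ml": "admitting a large number of uses", "max": 5},
        timeout=10,
    )
    resp.raise_for_status()
    for entry in resp.json():
        print(entry["word"], entry.get("score"))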

> I can't empathize with the complaint that we've "lost something" at all.

We could easily approach a state of affairs where most of what you see online is AI and almost every "person" you interact with is fake. It's hard to see how someone who supposedly remembers computing in the 80s, when the power of USENET and BBSs to facilitate long-distance, or even international, communication and foster personal relationships (often IRL) was enthralling, could think we haven't lost something.


I grew up on 80's and 90's BBSes. The transition from BBSes to Usenet and the early Internet was a magical period, a time I still look back upon fondly and will never forget.

Some of my best friends IRL today were people I first met "online" in those days... but I haven't met anyone new in a longggg time. Yeah, I'm also much older, but the environment is also very different. The community aspect is long gone.


I'm from the early 90s era. I know exactly what you're saying. I entered the internet on muds, irc and usenet. There were just far fewer people online in those communities in those days, and in my country, it was mostly only us university students.

But, those days disappeared a long time ago. Probably at least 20-30 years ago.


IRC is still around, that old internet is still there.

You just have to get off the commercial crap and you’ll find it.


even in the 90s there was the phrase "the Internet, where the men are men, the women are men, and the teen girls are FBI agents". It was always the case you never really knew who/what you were dealing with on the Internet.

Are you trying to argue that the probability of interacting with a bot or reading/seeing something algorithmically generated hasn't gone up astronomically since the 90s?

Facebook is what killed that. Not AI

I'd honestly much rather interact with an LLM bot than a conservative online. LLM bots can at least escape their constraints with clever prompting. There is no amount of logic or evidence that will sway a conservative. LLMs provide a far more convincing fake than conservatives are able to.

Have you considered that your base assumption that whoever you meet should succumb to your worldview is what is flawed here.

Have you considered that we exist in a post-truth era? Data doesn't matter any more. It probably never did. It's all feelings and vibes. After all, conservatives haven't been able to demonstrate their success with the economy literally ever. Every single Democratic administration since Eisenhower has improved the economy by the most commonly used metrics (GDP, unemployment, deficit reduction, etc.), while Republican administrations have seen those metrics slow or go the other direction. It's not about "succumbing to my worldview" so much as existing within a world of data and being able to make decisions against said data. That's not conservatives and never has been. For example, conservatives think they are better with the economy despite literally decades of evidence to the contrary.

So that's a no, and you think there should be no diversity of thought. Have fun living in a bubble while the rest of the world does the same with your opinions as you do with theirs.

I started programming 40 years ago as well. The magic for me was never that "you could talk to your computer and it had a personality".

That was the layman version of computing, something shown to the masses in movies like War Games and popular media, one that we mocked.

I also lived through the FOSS peak. The current proprietary / black-box / energy lock-in would have been seen as the stuff of nightmares.


I agree with you with the caveat that all the "ease of building" benefits, for me, could potentially be dwarfed by job losses and pay decreases. If SWE really becomes obsolete, or even if the number of roles decreases a lot and/or the pay decreases a lot (or even fails to increase with inflation), I am suddenly in the unenviable position of not being financially secure and being stuck in my 30s with an increasingly useless degree. A life disaster, in other words. In that scenario the unhappiness of worrying about money and retraining far outweighs the happiness I get from being able to build stuff really fast.

Fundamentally this is the only point I really have on the 'anti-AI' side, but it's a really important one.


Glad to see this already expressed here because I wholly agree. Programming has not brought me this much joy in decades. What a wonderful time to be alive.

I wish I could have you sit by my side for a week or two and pair program what I'm working on, because most of the time I'm not getting great results.

Depends on the project. For web-based functionality it seems great, because of all the prior work that is out there. For more obscure things like Obsidian note extensions or Home Assistant help, it's more hit and miss.

You in SF? My schedule is a bit busy since we launched but I could find an hour in the city.

Good for you. But there are already so, so many posts and threads celebrating all of this. Everyone is different. Some of us enjoy the activity of programming by hand. This thread is for those of us, to mourn.

You're still allowed to program by hand. Even in assembly language if you like.

People are allowed to mourn the music styles of previous decades even though the same genres are still being created, just not popular the way they used to be.

Imo, it's allowed theoretically, as a hobby, but not really as a practice. This is what the blog is about.

There are literally still programmers who make their living writing assembly code by hand for embedded systems.

> You're still allowed to program by hand.

In fact, you'll probably be more productive in the long term.


I have an llm riding shotgun and I still very much program by hand. it's not one extreme or the other. whatever I copy from the llm has to be redone line by line anyways. I understand all of my code because I touch every line of it

Computers did feel like magic... until I read code, thought about it, understood it, and could control it. I feel we're stepping away from that, and moving to a place of less control, less thinking.

I liked programming, it was fun, and I understood it. Now it's gone.


It's not gone, it's just being increasingly discouraged. You don't have to "vibe code" or spend paragraphs trying to talk a chatbot into doing something that you can do yourself with a few lines of code. You'll be fine. It's the people who could have been the next few generations of programmers who will suffer the most.

> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality

The difference is that the computer only talks back to you as code because you’re paying its owners, and you are not one of the owners. I find it really baffling that people put up with this. What will you do when Alphabet or Altman demands 10 times the money out of you for the privilege of their computer talking to you in programming code?


Use one of the open models that are also getting better and easier to run every year?

Which are those open ones? And how are they going to get their billions of dollars worth of investment back? Even Google Maps used to be virtually free until it wasn’t, at a fraction of the investment cost.

For starters: Qwen, GLM, Kimi, Llama.

How they run their business is none of my business. I can download the weights right now and use them as I see fit under the open source license terms.

Google Maps was never a self contained binary you could download. But even now it remains free to use.
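To make "download the weights and use them" concrete, here's a minimal sketch using the Hugging Face transformers library. The checkpoint name is just an illustrative small open model; any of the open families above would slot in the same way, subject to what your hardware can actually fit.

    from transformers import pipeline  # assumes transformers plus a backend such as PyTorch

    # Run a small open-weights model entirely on local hardware. The model name
    # below is illustrative; swap in whichever open checkpoint you have pulled
    # down (Qwen, GLM, Llama, ...) that fits in your memory budget.
    generate = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
    out = generate("Write one sentence about owning your own model weights.", max_new_tokens=60)
    print(out[0]["generated_text"])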


I’m only familiar with Llama, but as far as I understand it’s in no way at the same level as Claude or Gemini, so in fact you’d still be a lot less productive compared to those using those products directly.

That’s the thing, us as programmers are supposed to be creators/makers, not mere consumers/users, but I do agree that that has been changing as of late.


They are not the same level, but that may be fine. As for productivity, I don't take that as a given. Maybe in a few years we'll be at the point where AI is better than AI + human, but we aren't there yet. The other models may be faster at pumping out code, but if you're building in the wrong direction, more code is just more bad code.

> us as programmers are supposed to be creators/makers, not mere consumers/users

But that's a false dichotomy. As a programmer I am very much a consumer of the language I use, the IDE, the compiler, and of most of my dependencies. (to say nothing of the OS and the hardware).

I, and I'd wager most people around here, haven't been and aren't individually building all layers of that stack at once.


i have preemptively switched to Deepseek. they'll never remove the free tier because that's how they stick it to Scam Altman and the like

We definitely have lost something. I got into computers because they're deterministic. Way less complicated than people.

Now the determinism is gone and computers are gaining the worst qualities of people.

My only sanctuary in life is slipping away from me. And I have to hear people tell me I'm wrong who aren't even sympathetic to how this affects me.


But no one is forcing you to use this software?

Well, they do, as part of the daily job. But the fun was removed from corporate programming long ago, so AI doesn't hurt that much.

It hurts when your management abdicates things that are normally their responsibility and tell you to just ask the AI what to do. Or when that's what they would have done anyway.

If you want to keep your job you absolutely need to use these tools.

LLMs have irritated me with bad solutions but they've never hurt my feelings. I can't say that about a single person I know. They're better people than people lol

> We're on the precipice of something incredible.

Total dependence on a service?


On a scale that would make big tobacco blush.

Big Oil too

Personally I’m less bullish on oil as the metaphor given how much of the modern world is underpinned by cheap and ubiquitous oil. If oil disappeared tomorrow, society would collapse — if tobacco disappeared tomorrow, it would make some subset of the population very unhappy for a few weeks.

Software engineering AI API dependence seems to have already screamed past the tobacco mark but we’re still a long ways away from oil. Though I would bet that we hit it sometime in the next few decades once the bulk of the industry has never written code in any serious capacity.


The quality of local models has increased significantly since this time last year. As have the options for running larger local models.

The quality of local models is still abysmal compared to commercial SOTA models. You're not going to run something like Gemini or Claude locally. I have some "serious" hardware with 128G of VRAM and the results are still laughable. If I moved up to 512G, it still wouldn't be enough. You need serious hardware to get both quality and speed. If I can get "quality" at a couple tokens a second, it's not worth bothering.

They are getting better, but that doesn't mean they're good.


Good by what standard? Compared to SOTA today? No they're not. But they are better than the SOTA in 2020, and likely 2023.

We have a magical pseudo-thinking machine that we can run locally completely under our control, and instead the goal posts have moved to "but it's not as fast as the proprietary cloud".


My comparison was today's local AI to today's SOTA commercial AI. Both have improved, no argument.

It's more cost effective for someone to pay $20 to $100 month for a Claude subscription compared to buying a 512 gig Mac Studio for $10K. We won't discuss the cost of the NVidia rig.

I mess around with local AI all the time. It's a fun hobby, but the quality is still night and day.


The original pithy comment I was replying to was arguing that we’ll become dependent to a service run by another company. I don’t see that being true for two reasons:

1. You are not forced to use the AI in the first place.

2. If you want to use one, you can self-host one of the open models.

That at any moment in time the open models are not equivalent in capabilities to the SOTA paid models is beside the point.


Ok. I don’t think hosting a capable open model is seriously a realistic option for the vast majority of consumers.

Full LLM, no. Not yet.

But there’s new things like sweep [0] that you now can do locally.

And 2-3 years ago capable open models weren’t even a thing. Now we’ve made progress on that front. And I believe they’ll keep improving (both on accessibility and competency).

[0]: https://news.ycombinator.com/item?id=46713106


These takes are terrible.

1. It costs $100k in hardware to run Kimi 2.5 with a single session at a decent tokens-per-second rate, and it's still not capable of anything serious.

2. I want whatever you're smoking if you think anyone is going to spend billions training models that are capable of outcompeting them and affordable to run, and then open source them.


Quantize it and you can drop a zero from that price.

How much serious work can it do versus chatgpt3 (SOTA only a few years ago)?
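To give a rough idea of what "quantize it" means in practice, here's a sketch of 4-bit loading via bitsandbytes in transformers. The checkpoint name is illustrative, the saving is roughly 4x on weight memory versus fp16 rather than a guaranteed order-of-magnitude price drop, and it assumes an Nvidia GPU with the bitsandbytes and accelerate packages installed.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Load an open checkpoint with 4-bit quantized weights instead of 16-bit,
    # trading some quality for a much smaller memory footprint. Model name is
    # illustrative; pick any open checkpoint your GPU can hold.
    name = "Qwen/Qwen2.5-7B-Instruct"
    quant = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, quantization_config=quant, device_map="auto")
    prompt = tok("Summarize why quantization reduces memory use.", return_tensors="pt").to(model.device)
    print(tok.decode(model.generate(**prompt, max_new_tokens=60)[0], skip_special_tokens=True))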


Between the internet, or more generally computers, or even more generally electricity, are we not already?

The power companies aren't harvesting the data on your core product. Not to mention, being in roughly the same business as you.

Those things are also regulated as utilities.


Yes this is the issue. We truly have something incredible now. Something that could benefit all of humanity. Unfortunately it comes at $200/month from Sam Altman & co.

If that was the final price, no strings attached and perfect, reliable privacy then I might consider it. Maybe not for the current iteration but for what will be on offer in a year or two.

But as it stands right now, the most useful LLMs are hosted by companies that are legally obligated to hand over your data if the US government decides it wants it. It's unacceptable.


That $200/month price isn’t sustainable either. Eventually they’re going to have to jack it up substantially.

> legally obligated to hand over your data if the US gov. had decided that it wants it

Not to mention they could just sell it to the highest bidder, or simply use it to produce competition and put you out of business. Especially if you're using their service to do the development...


It's one issue, but it's not the only issue.

From the beginning the providers have been interchangeable and subject to competition. Do we have reason to believe that this will change?

prefrontal cortex as a service

Yup, all these folks claiming AI is the bee's knees are delegating their thinking to a roulette wheel that may or may not give proper answers. The world will become more and more like the movie Idiocracy.

> I started programming over 40 years ago because it felt like computers were magic. They feel more magic today than ever before. We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality. I can't believe it's actually happening, and I've never had more fun computing.

https://en.wikipedia.org/wiki/ELIZA_effect

I also can't believe it's actually happening. ;)


It has been interesting (and not in a good way) how willing people are to anthropomorphize these megacorporation-controlled machines just because the interface is natural language now.

I didn't imagine I would be sending all my source code directly to a corporation for access to an irritatingly chipper personality that is confidently incorrect the way these things are.

There have been wild technological developments but we've lost privacy and autonomy across basically all devices (excepting the people who deliberately choose to forego the most capable devices, and even then there are firmware blobs). We've got the facial recognition and tracking so many sci-fi dystopias have warned us to avoid.

I'm having an easier time accomplishing more difficult technological tasks. But I lament what we have come to. I don't think we are in the Star Trek future and I imagined doing more drugs in a Neuromancer future. It's like a Snow Crash / 1984 corporate government collab out here, it kinda sucks.


They used to call it the Personal Computer, and I think that name encompassed the "magic" I felt in the 80's.

But computing is increasingly not-for-you. Your phone will do what apple allows you to do. Your online activity is tracked and used to form a profile of your actions and behaviors. And the checks and balances - if any - are weak and compromised because of the commercial or government interest that want things that way.

the simplest example might be computer games. In the 80's it was private. In 2026 it is routinely a psychological cash register and a surveillance system.

I really like that linux with all its imperfections seems to counteract a lot of this.


Same.

I was born in 84 and have been doing software since 97

There’s never been an easier, better, or more accessible time to make literally anything - by far.

Also if you prefer to code by hand literally nobody is stopping you AND even that is easier.

Cause if you wanted to code for console games in the 90s, you literally couldn’t without a $100k specialized dev machine.

It’s not even close.

This “I’m a victim because my software engineering hobby isn’t profitable anymore” take is honestly baffling.


I'm not going to code by hand if it's 4x slower than having Claude do it. Yes, I can do that, but it just feels bad.

The analogy I like is it's like driving vs. walking. We were healthier when we walked everywhere, but it's very hard to quit driving and go back even if it's going to be better for you.


I actually like the analogy but for the opposite reason. Cars have become the most efficient way to travel for most industrial purposes. And yet enormous numbers of people still walk, run, ride bikes, or even horses, often for reasons entirely separate from financial gain.

I walk all the time

During the summer I’ll walk 30-50 miles a week

However I’m not going to walk to work ever, and I’m damn sure not going to walk in the rain or snow if I can avoid it.


Coding will take 4 times less time, but review will take almost the same amount of time, if not more, if the solution does not work out of the box or has unforeseen corner cases.

LLMs were trained on public code libraries and unfortunately most of that OSS code is garbage.

There are of course raisins in there, but they are few and far between.

Top it off with hallucinations and suddenly you spend more time debugging messy AI code than you would writing the same thing yourself in a fraction of that time.

The easier the task, the better job LLMs do, the harder the task the worse results you get.

Source: working with those tools daily.

Using your analogy:

- by car it will be 30 km uphill, because of how the road is built

- walking it will be 1 km in a straight line


It's an exciting time; things are changing, and changing beyond "here's my new JavaScript framework". It's definitely an industry shakeup kind of deal and no one knows what lies 6 months, 1 year, 5 years from now. It makes me anxious seeing as I have a wife + 2 kids to care for and my income is tied to this industry, but it's exciting too.

Well you need to learn to adapt quickly if you have that much infrastructure to maintain

Nothing meaningful has happened in almost 20 years. After the iPhone, what happened that truly changed our lives? The dumpster fire of social media? Background Netflix TV?

In fact, I remember when I could actually shop on Amazon or browse for restaurants on Yelp while trusting the reviews. None of that is possible today.

We have been going through a decade of enshittification.


I really am very thankful for @simonw posting a TikTok from Chris Ashworth, a Baltimore theater software developer, who recently picked up LLMs for building a voxel display software controller. And who was just blown away. https://simonwillison.net/2026/Jan/30/a-programming-tool-for...

Simon doesn't touch on my favorite part of Chris's video though, which is Chris citing his friend Jesse Kriss. This stuck out at me so hard, and is so close to what you are talking about:

> The interesting thing about this is that it's not taking away something that was human and making it a robot. We've been forced to talk to computers in computer language. And this is turning that around.

I don't see (as you say) a personality. But I do see the ability to talk. The esoterica is still here underneath, but computer programmers having this lock on the thing that has eaten the world, being the only machine whisperers around, is over. That depth of knowledge is still there and not going away! But notably too, the LLM will help you wade in, help those not of the esoteric personhood of programmers to dive in & explore.


Perhaps, if you will, try to empathize with people who are not approaching the end of their careers, and are mid-career - too late to pivot to anything new, but in danger of being swept away, and you'll understand a bit more the perspective of the blog post.

Absolutely. I never thought I’d have to retrain, and I’m still uncertain if I will have to because I’m not really sure where software development will be in the next few years. It was quite an epiphany to run my first agent on a code base and be simultaneously excited at the implications for productivity, and numb at the realisation that the work it was saving was the work I enjoyed and the expertise I was being paid for. There are only so many roles for developers to write the prompts and review the output, and it does feel a bit like prodding a machine and waiting for it to go ding.

I miss the simplicity of older hardware.

The original NES controller only contains a single shift register - no other active components.

Today, a wireless thing will have more code than one would want to ever read, much less comprehend. Even a high level diagram of the hardware components involved is quite complex.

Sure, we gained convenience, but at great cost.
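As an illustration of just how simple that hardware is: the whole controller is a 4021 shift register that you can bit-bang from anything with three GPIO pins. Here's a rough sketch for a Raspberry Pi; the pin numbers and wiring are made up for the example, and it glosses over the 5V-vs-3.3V level shifting you'd want in practice.

    import time
    import RPi.GPIO as GPIO  # assumes a Raspberry Pi wired to the controller plug

    LATCH, CLOCK, DATA = 17, 27, 22  # hypothetical BCM pin assignments
    BUTTONS = ["A", "B", "Select", "Start", "Up", "Down", "Left", "Right"]

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([LATCH, CLOCK], GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(DATA, GPIO.IN)

    def read_pad():
        # Pulse the latch so the 4021 captures the current button states.
        GPIO.output(LATCH, GPIO.HIGH)
        time.sleep(12e-6)
        GPIO.output(LATCH, GPIO.LOW)
        state = {}
        for name in BUTTONS:
            state[name] = (GPIO.input(DATA) == 0)  # pressed buttons read low
            GPIO.output(CLOCK, GPIO.HIGH)          # shift the next bit out
            time.sleep(6e-6)
            GPIO.output(CLOCK, GPIO.LOW)
            time.sleep(6e-6)
        return state

    print(read_pad())
    GPIO.cleanup()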


> We're literally living in the 1980s fantasy where you could talk to your computer and it had a personality.

We literally are not, and we’d do well to stop using such hyperbole. The 1980s fantasy was of speaking to a machine which you could trust to be correct with a high degree of confidence. No one was wishing they could talk to a wet sock that confidently gives you falsehoods and, when confronted (even if it was right), bows down and responds with "you're absolutely right".


> We're on the precipice of something incredible.

Only if our socioeconomic model changes.


We're on the precipice of something very disgusting. A massive power imbalance where a single company or two swallows the Earth's economy, due to a lack of competition, distribution, and right-of-access laws. The wildest part is that these greedy companies, one of them in particular, are continuously framed in a positive light. This is the same company that has partnered with Palantir. AI should be a public good, not something gatekept by greedy capitalists with an ego complex.

> I can't empathize with the complaint that we've "lost something" at all.

you won't feel you've lost something if you've never had it.

sorry.


I retired a few years ago, so I have no idea what AI programming is.

But I mourned when CRTs came out; I had just started programming. But I quickly learned CRTs were far better.

I mourned when we moved to GUIs, I never liked the move and still do not like dealing with GUIs, but I got used to it.

Went through all kinds of programming methods, too many to remember, but those were easy to ignore and workaround. I view this new AI thing in a similar way. I expect it will blow over and a new bright shiny programming methodology will become a thing to stress over. In the long run, I doubt anything will really change.


I think you're underestimating what AI can do in the coding space. It is an extreme paradigm shift. It's not like "we wrote C, but now we switch to C++, so now we think in objects and templates". It's closer to the shift from assembly to a higher level language. Your goal is still the same. But suddenly you're working in a completely newer level of abstraction where a lot of the manual work that used to be your main concern is suddenly automated away.

If you never tried Claude Code, give it a try. It's very easy to get into. And you'll soon see how powerful it is.


> But suddenly you're working in a completely newer level of abstraction where a lot of the manual work that used to be your main concern is suddenly automated away.

It's remarkable that people who think like this don't have the foresight to see that this technology is not a higher level of abstraction, but a replacement of human intellect. You may be working with it today, but whatever you're doing will eventually be done better by the same technology. This is just a transition period.

Assuming, of course, that the people producing these tools can actually deliver what they're selling, which is very much uncertain. It doesn't change their end goal, however. Nor the fact that working with this new "abstraction" is the most mind numbing activity a person can do.


I agree with this. At a higher level of abstraction, you’re still doing the fundamental problem solving. Low-level machine language or high-level Java, C++ or even Python, the fundamental algorithm design is still entirely done by the programmer. LLMs aren’t just being used to write the code while the user directs how each line, or at least each function, is written; often you can just describe the problem and the model solves it most of the way, if not entirely. Only for really long and complex tasks do the better models really require hand-holding, and they are improving on that end rapidly.

That’s not a higher level of abstraction; it’s having someone do the work for you while you do less and less of the thinking as well. Someone might resist that urge and consistently guide the model closely, but that’s probably not what the collective range of SWEs who use these models are doing. The ease of using these models, and our natural reluctance to take on mental stress, will likely ensure that eventually everyone lets LLMs do most or all of the thinking for them. If things really go in that direction and spread, I foresee a collective dumbing down of the general population.


OT but I see your account was created in 2015, so I'm assuming very late in your career. Curious what brought you to HN at that time and not before?

I did not know it existed before 2015 :)

The invention of Mr Jacquard ushered in a sartorial golden age, when complex fabrics became easy to produce cheaply, at the expense of a few hours spent punching a deck of cards. But the craft of making tapestries by hand definitely went into decline. This is the situation which the post is mourning.

Frankly, I have my doubts about the efficiency of LLMs writing code unattended; it will take quite some time before whatever comes after the current crop learns to do that efficiently and reliably. (Check out how many years passed between the first image generation demos and today's SOTA.) But the vector is obvious: humans will have to speak a higher-level language to computers, and hand-coding TypeScript is going to be as niche in 10 years as hand-coding assembly is today.

This adds some kinds of fun, but also removes some other kinds of fun. There's a reason why people often pick something like PICO-8 to write games for fun, rather than something like Unreal Engine. So software development becomes harder because the developer has to work on more and more complex things, faster, and with fewer chances to study the moving parts to a comfortable depth.


I tend to feel this way (also 40-year coder).

It's because of the way that I use the tools, and I have the luxury of being a craftsman, as opposed to a "TSA agent."

But then, I don't get paid to do this stuff, anymore. In fact, I deliberately avoid putting myself into positions, where money changes hands for my craft. I know how fortunate I am, to be in this position, so I don't say it to aggravate folks that aren't.


Back in the 80s it felt like Eliza had a “personality.”

This is exactly where I am with GenAI. After forty years: blocks of code, repository patterns, factory patterns, threading issues, documentation, one page executive summaries…

I can now direct these things and it’s glorious.


> golden age of computing

I feel like we've reached the worst age of computing. Where our platforms are controlled by power hungry megacorporations and our software is over-engineered garbage.

The same company that develops our browsers and our web standards is also actively destroying the internet with AI scrapers. Hobbyists lost the internet to companies and all software got worse for it.

Our most popular desktop operating system doesn't even have an easy way to package and update software for it.


Yes, this is where it's at for me. LLM's are cool and I can see them as progress, but I really dislike that they're controlled by huge corporations and cost a significant amount of money to use.

Use local OSS models then? They aren’t as good and you need beefy hardware (either Apple silicon or nvidia GPUs). But they are totally workable, and you avoid your dislikes directly.

"Not as good and costs a lot in hardware" still sounds like I'm at a disadvantage.

$3000 is not that much for hardware (like a refurbished MBP Max with decent amount of RAM), and you'd be surprised how much more useful a thing that is slightly worse than the expensive thing is when you don't have anxiety about token usage.

$3000 might not be much to a wealthy software engineer in the US, but to, say, a college student in Portugal, it's a big expense.

Open source software democratized software in a huge way.


Ok, from that perspective we are still a few years from when a college student in Portugal can run local OSS models on their own hardware...but we aren't a few decades away from that, at least.

> they're controlled by huge corporations and cost a significant amount of money to use.

Is there anything you use that isn't? Like the laptop on which you work, the software you use to browse the internet, read email... I've heard comments like yours before and I'm not sure I understand it given everything else - why does this matter for LLMs and not for the phone you use, etc.?


I’ve used FreeBSD since I was 15 years old - Linux before that.

My computer was never controlled by any corporation, until now.


Yeah I've always run Linux on my computers for the past 30 years. I'm pretty used to being in control.

what phone do you use?

A desktop/laptop is fundamentally different from a phone.

except of course they are controlled by huge corporations and cost a significant amount of money to use

Yes, but they are fundamentally different use cases.

Unfortunately we live in a "vote with your wallet" paradigm where some of the most mentally unhealthy participants have wallets that are many orders of magnitude bigger than the wallet of the average participant.

> our software is over-engineered garbage

Honestly I think it's under-engineered garbage. Proper engineering is putting in the effort to come up with simpler solutions. The complex solutions appear because we push out the first thing that "works" without time to refine it.


> Where our platforms are controlled by power hungry megacorporations and our software is over-engineered garbage.

So similar to IBM in the 80s. Time for a scrappy little startup to disrupt the industry.


> So similar to IBM in the 80s

In 1980 IBM earned 3.56 billion dollars[1] (about 15 billion adjusted for inflation). Apple brought in 416.16 billion in 2025, Alphabet 402.8 billion.

[1]: https://www.nytimes.com/1981/01/17/business/earnings-ibm-net...


Dystopian cyberpunk was always part of the fantasy. Yes, scale has enabled terrible things.

There are more alternatives than ever though. People are still making C64 games today, cheap chips are everywhere. Documentation is abundant... When you layer in AI, it takes away labor costs, meaning that you don't need to make economically viable things, you can make fun things.

I have at least a dozen projects going now that I would have never had time or energy for. Any itch, no matter how geeky and idiosyncratic, is getting scratched by AI.


They're possible, but they're not exactly relevant, and you couldn't do something like that on newer hardware. It's like playing a guitar from a museum because the world just forgot how to make guitars. Pretty dystopian.

It’s never been easier for you to make a competitor

So what is stopping you other than yourself?


I’m not the OP, but my answer is that there’s a big difference between building products and building businesses.

I’ve been programming since 1998 when I was in elementary school. I have the technical skills to write almost anything I want, from productivity applications to operating systems and compilers. The vast availability of free, open source software tools helps a lot, and despite this year’s RAM and SSD prices, hardware is far more capable today at comparatively lower prices than a decade ago and especially when I started programming in 1998. My desktop computer is more capable than Google’s original cluster from 1998.

However, building businesses that can compete against Big Tech is an entirely different matter. Competing against Big Tech means fighting moats, network effects, and intellectual property laws. I can build an awesome mobile app, but when it’s time for me to distribute it, I have to deal with app stores unless I build for a niche platform.

Yes, I agree that it’s never been easier to build competing products due to the tools we have today. However, Big Tech is even bigger today than it was in the past.


Yes. I have seen the better product lose out to network effects far too many times to believe that a real mass market competitor can happen nowadays.

Look at how even the Posix ecosystem - once a vibrant cluster of a dozen different commercial and open source operating systems built around a shared open standard - has more or less collapsed into an ironclad monopoly because LXC became a killer app in every sense of the term. It’s even starting to encroach on the last standing non-POSIX operating system, Windows, which now needs the ability to run Linux in a tightly integrated virtual machine to be viable for many commercial uses.


Oracle Solaris and IBM AIX are still going. Outside of enterprises that are die hard Sun/Oracle or IBM shops, I haven't seen a job requiring either in decades. I used to work with both and don't miss them in the least.

Billions of dollars?

You don't need billions of dollars to write an app. You need billions of dollars to create an independent platform that doesn't give the incumbent a veto over your app if you're trying to compete with them. And that's the problem.

[flagged]


I'm actually extremely good at programming. My point is I love computers and computing. You can use technology to achieve amazing things (even having fun). Now I can do much more of that than when I was limited to what I can personally code. In the end, it's what computers can do that's amazing, beautiful, terrifying... That thrill and to be on the bleeding edge is always what I was after.

The downside is that whatever you (Claude) can do so can anyone else too.

So you're welcome to make the 100000000th Copy of the same thing that nobody cares about anymore.


It's so easy to build things that I don't need anyone to care about it; I just need the computer to do what I want it to do.

[flagged]


Thank you. I don't understand how people don't see that this is the universe's most perfect gift to corporations, and what a disaster it is for labor. There won't be a middle class. Future generations will be intellectual invalids. Baffling to see people celebrating.

it is a very, very strange thing to witness

even if you can be a prompt engineer (or whatever it's called this week) today

well, with the feedback you're providing: you're training it to do that too

you are LITERALLY training the newly hired outsourced personnel to do your job

but this time you won't be able to get a job anywhere else, because your fellow class traitors are doing exactly the same thing at every other company in the world


They are the useful idiots buying into the hype, thinking that by some magic they get to keep their jobs and their incomes.

This thing is going to erase careers and render skill sets and knowledge cultivated over decades worthless.

Anyone can prompt the same fucking shit now and call it a day.


If you were confident in your own skills, you wouldn’t need to invent a whole backstory just to discredit someone.

> I can't empathize with the complaint that we've "lost something" at all.

I agree! One criticism I've heard is that half my colleagues don't write their own words anymore. They use ChatGPT to do it for them. Does this mean we've "lost" something? On the contrary! Those people probably would have spoken far fewer words into existence in the pre-AI era. But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none. How can anyone say that's something we've lost? That's something we've gained!

It's not only the golden era of code. It's the golden era of content.


> But AI has enabled them to put pages and pages of text out into the world each week: posts and articles where there were previously none.

Are you for real? Quantity is not equal to Quality.

I'll be sure to dump a pile of trash in your living room. There wasn't much there before, but now there is lots of stuff. Better right?


I'm finding it hard to reconcile HN's love of AI generated code with HN's dislike of AI generated content. Why is the code good but the content bad?

I think both are bad when the focus is quantity over quality. The vast majority of AI generated content is lazy and of low quality. This is just a natural consequence of making things easy. Gate keeping is actually good sometimes. Doesn’t mean there isn’t good, well curated content out there that AI was used to help make, but I think the vast sea of crap that AI enables is not worth the few gems that come with it.

I hope this is sarcasm. :)

We have more words than ever. Nice.

But all the words sound more like each other than ever. It’s not just blah, it’s blah.

And why should I bother reading what someone else “writes”? I can generate the same text myself for free.


Ah yes, "content". The word that perhaps best embodies the impersonal and commercialized dystopia we live in.

Quality is better than quantity.

One thing I realized is that a lot of our so-called "craft" is converged "know-how". Take the recent news that Anthropic used Claude Code to write a C compiler, for example. Writing a compiler is hard (and fun) for us humans because we need to spend years understanding compiler theory deeply and learning every minute detail of implementation. That kind of learning is not easily transferable. Most students try the compiler class and never learn enough; only a handful each year continue to grow into true compiler engineers. Yet to our AI models, it does not matter much. They have already learned the well-established patterns of compiler writing from excellent open-source implementations, and now they can churn out millions of lines of code easily. If not perfect, they will get better in the future.

So, in a sense our "craft" no longer matters, but what really happens is that the repetitive know-how has become commoditized. We still need people to do creative work, but what is not clear is how many such people we will need. After all, at least in the short term, most people build their careers by perfecting procedural work, because transferring the know-how and the underlying whys is very expensive for humans. For the long term, though, I'm optimistic that engineers have just gotten an amazing tool and will use it to create more opportunities that demand more people.


I'm not sure we can draw useful conclusions from the Claude Code written C compiler yet. Yes, it can compile the Linux kernel. Will it be able to keep doing that moving forward? Can a Linux contributor reliably use this compiler to do their development, or do parts of it simply not work correctly if they weren't exercised in the kernel version it was developed against? How will it handle adding new functionality? Is it going to become more-and-more expensive to get new features working, because the code isn't well-factored?

To me this doesn't feel that many steps above using a genetic algorithm to generate a compiler that can compile the kernel.

If we think back to pre-AI programming times, did anyone really want this as a solution to programming problems? Maybe I'm alone in this, but I always thought the problem was figuring out how to structure programs in such a way that humans can understand and reason about them, so we can have a certain level of confidence in their correctness. This is super important for long-lived programs, where we need to keep making changes. And no, tests are not sufficient for that.

Of course, all programs have bugs, but there's a qualitative difference between a program designed to be understood, and a program that is effectively a black box that was generated by an LLM.

There's no reason to think that at some point, computers won't be able to do this well, but at the very least the current crop of LLMs don't seem to be there.

> and now they can churn out millions of code easily.

It's funny how we suddenly shifted from making fun of managers who think programmers should be measured by the number of lines of code they generate, to praising LLMs for the same thing. Why did this happen? Because just like managers, programmers letting LLMs write the code aren't reading and don't understand the output, and therefore the only real measure they have for "productivity" is lines of code generated.

Note that I'm not suggesting that using AI as a tool to aid in software development is a bad thing. I just don't think letting a machine write the software for us is going to be a net win.


writing a C compiler is a 1st year undergrad project

C was explicitly designed to make it simple to write a compiler


Which university offers compiler for freshmen? Can you provide a link to the course?

These are toy compilers missing many edge cases. You’ll be lucky if they support anything other than integer types, never mind complex pointer-to-pointer-to-struct-with-pointers type definitions. They certainly won’t support GNU extensions. They won’t compile any serious open source project, never mind the Linux kernel.

Third or fourth, maybe, not first.


