This is no different than carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes. Will they ever carve furniture by hand for a business? Probably not. Can they still enjoy the act of working with the wood? Yes.
If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
I’ve heard this metaphor before and I don’t think it works well.
For one, a power tool like a bandsaw is a centaur technology. I, the human, am the top half of the centaur. The tool drives around doing what I tell it to do and helping me to do the task faster (or at all in some cases).
A GenAI tool is a reverse-centaur technology. The algorithm does almost all of the work. I’m the bottom half of the centaur helping the machine drive around and deliver the code to production faster.
So while I may choose to use hand tools in carpentry, I don’t feel bad using power tools. I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
There are DeWALT, Craftsman, Stanley, and other carpentry/mechanic power-tool brands that make a wide variety of all manner of tools and tooling; the equivalents in computers (at least UNIXy ones) are coreutils (fileutils, shellutils, and textutils), netpbm, sed, awk, the contents of /usr/bin, and all their alternative, updated brands like fd, the silver searcher, and ripgrep; or the progression of increasing sharpness in revision control tools from rcs and sccs through svn to mercurial and git; or telnet to ssh, rcp to rsync, netcat to socat. Even perl and python qualify as multi-tool versions of separate power tools. I'd even include language compilers and interpreters in general as extremely sharp and powerful multi-tools, the machine shop that lets you create more power tools. When you use these, you're working with your hands.
GenAI is none of that, it's not a power tool, even though it can use power tools or generate output like the above power tools do. GenAI is hiring someone else to build a bird house or a spice rack, and then saying you had a hand in the results. It's asking the replicator for "tea, earl grey, hot". It's like how we elevate CEOs just because they're the face of the company, as if they actually did the work and were solely responsible for the output. There's skill in organization and direction, not all CEOs get undeserved recognition, but it's the rare CEO who's getting their hands dirty creating something or some process, power tools or not. GenAI lets you, everyone, be the CEO.
Why else do you think I go to work every day? Because I have a “passion” for sitting at a computer for 40 hours a week to enrich a private company's bottom line, or a SaaS product, or a LOB implementation? It’s not astroturfing - it’s realistic.
Would you be happier if I said I love writing assembly language code by hand like I did in 1986?
My analogy is more akin to using Google Maps (or any other navigation tool).
Prior to GPS and navigation devices, you would print out the route ahead of time, and even then you would stop at places and ask people for directions.
Post Google Maps, you follow it, and then if you know there's a better route, you choose to take a different path and Google Maps will adjust the route accordingly.
Google Maps is still insanely bad for hiking and cycling, so I combine the old-fashioned map method with an outdoor GPS onto which I load a precomputed GPX track for the route that I want to take.
I think this argument would work if hand-written code would convey some kind of status, like an expensive pair of Japanese selvage jeans. For now though, it doesn't seem to me that people paying for software care if it was written by a human or an AI tool.
You only feel that way about power tools because the transition for carpentry happened long ago. Carpenters viewed power tools much as we do LLMs today. Furniture factories, equivalent of dark agentic code factories, caused much despair to them too.
Humans are involved with assembly only because the last bits are maniacally difficult to get right. Humans might be involved with software still for many years, but it probably will look like doing final assembly and QA of pre-assembled components.
Maybe. I'm not sure it's that different, though. If one person can do the work of two because of power tools, then why keep both? Same with AI. How people feel about it doesn't seem relevant.
Maybe the right example is the role of tractors in agriculture. Prior to tractors you had lots of people do the work, or maybe animals. But tractors and engines eliminate a whole class of labor. You could still till a field by hand or with a horse if you want, but it's probably not commercially viable.
First, creating power tools didn’t cause mass layoffs of carpenters and construction workers. There continued to be a demand for skilled workers.
Second, power tools work with the user’s intent. The user does the planning, the measuring, the cutting and all the activities of building. They might choose to use a dovetail saw instead of fasteners to make a joint.
Third, programming languages are specifications given to a compiler to generate more code. A single programmer can scale to many more customers than a labourer using tools.
The classification of centaur vs reverse-centaur tools came to me by way of Cory Doctorow.
There might be ways to use the technology that don't make us into reverse centaurs, but we haven't discovered them yet. What we have in its current form isn't a tool.
Did power tools not cause layoffs? That seems like a dubious claim to me. Building a house today takes far fewer people than 100 years ago. Seems unlikely that all the extra labor found other things to do in construction.
To me, they're all the same because they are all tools that stand between “my vision” and “it being built.”
e.g. when I built a truck camper, maybe 50% was woodworking but I had to do electrical, plumbing, metalworking, plastic printing, and even networking infra.
The satisfaction was not from using power tools (or hand tools too) — those were chores — it was that I designed the entire thing from scratch by myself, it worked, was reliable through the years, and it looked professional.
The “work” is not creating for and while loops. The work for me is:
1. Looking at the contract and talking to sales about any nuances from the client
2. Talking to the client (use stakeholder if you are working for a product company) about their business requirements and their constraints
3. Designing the architecture.
4. Presenting the architecture and design and iterating
5. Doing the implementation and iterating. This was the job of myself and a team depending on the size of the project. I can do a lot more by myself now in 40 hours a week with an LLM.
6. Reviewing the implementation
7. User acceptance testing
8. Documentation and handover.
I’ve done some form of this from the day I started working 25 years ago. I was fortunate to never be a “junior developer”. I came into my first job with 10 years of hobbyist experience and having implemented a multi-user data entry system.
I always considered coding as a necessary evil to see my vision come to fruition.
It seems like you're doing a lot of work to miss the actual point. Focusing on the minutiae of the analogy is a distraction from the overarching and obvious point. It has nothing to do with how you feel; it has to do with how you will compete in a world with others who feel differently.
There were carpenters who refused to use power tools, some still do. They are probably happy -- and that's great, all the power to them. But they're statistically irrelevant, just as artisanal hand-crafted computer coding will be. There was a time when coders rejected high level languages, because the only way they felt good about their code is if they handcrafted the binary codes, and keyed them directly into the computer without an assembler. Times change.
In my opinion, it is far too early to claim that developers developing like it was maybe three years ago are statistically irrelevant. Microsoft has gone in on AI tooling in a big way and they just nominated a "software quality czar".
I used the future tense. Maybe it will be one hundred years from now, who knows; but the main point still stands. It would just be nice to move the conversation beyond "but I enjoy coding!".
I don’t think it’s correct to claim that AI-generated code is just the next level of abstraction.
All previously mentioned levels produce deterministic results. Same input, same output.
AI-generation is not deterministic. It’s not even predictable. And the example of big software companies clearly shows what mass adoption of AI tools will look like in terms of software quality. I dread the day using AI becomes an expectation; this will be a level of enshittification never before imagined.
You're not wrong. But your same objection was made against compilers. That they are opaque, have differences from one to another, and can introduce bugs, they're not actually deterministic if you upgrade the compiler, etc. They separate the programmer from the code the computer eventually executes.
In any case, clinging to the fact that this technology is different in some ways, continues to ignore the many ways it's exactly the same. People continue to cling to what they know, and find ways to argue against what's new. But the writing is plainly on the wall, regardless of how much we struggle to emotionally separate ourselves from it.
They may not be wrong per se, but that argument is essentially a strawman.
If these tools are non-deterministic, then how did someone at Anthropic spend the equivalent of $20,000 of Anthropic compute and end up with a C compiler that can compile the Linux kernel (one of the largest bodies of C code out there)?
To be frank, C compilers' source code was probably in its training material multiple times; it just had to translate it to Rust.
That aside, one success story doesn't mean much and doesn't even touch the determinism question. With every ad like this, Anthropic should have posted all the prompts they used.
People on here keep trotting out this "AI-generation is not deterministic" (more properly speaking, non-deterministic) argument …
And my retort to you (and them) is, "Oh yeah, and so?"
What about me asking Claude Code to generate a factorial function in C or Python or Rust or insert-your-language-of-choice-here is non-deterministic?
If you're referring to the fact that, for a given input, LLMs (because of certain controls - temperature, say) don't give the same outputs for the same inputs: yeah, okay. If we're talking about conversational language, that makes a meaningful difference to whether it sounds like an ELIZA robot or more like a human. But ask an LLM to output some code, and that code has to adhere to functional requirements independent of, muh, non-determinism. And what's to stop you (if you're so sceptical/scared) writing test cases to make sure the code that is magically whisked out of nowhere performs as you desire? Nothing. What's to stop you getting one agent to write the test suite (and you reviewing that test suite for correctness), and another agent to write the code and self-correct by checking its code against the test suite? Nothing.
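To make that concrete, here's a minimal sketch of the kind of test suite I mean, in Python with pytest. The module name generated.py and the rule that negative input raises ValueError are my own assumptions for illustration; the point is just that these assertions hold or fail regardless of which model run produced the implementation:

    # Minimal sketch: pin down the required behaviour of agent-written code.
    # "generated" is a hypothetical module the agent was asked to produce;
    # the ValueError rule is an assumed part of the spec, not a given.
    import pytest
    from generated import factorial

    def test_known_values():
        assert factorial(0) == 1
        assert factorial(1) == 1
        assert factorial(10) == 3628800

    def test_recurrence():
        # n! must equal n * (n-1)! for every n in the range we care about
        for n in range(2, 50):
            assert factorial(n) == n * factorial(n - 1)

    def test_rejects_negative_input():
        with pytest.raises(ValueError):
            factorial(-1)

If the agent's code passes that, I don't much care which of the many possible token sequences it emitted along the way.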
I would advise anyone encountering this but-they're-non-deterministic argument on HN to really think through what its proponents are implying. I mean, aren't humans non-deterministic? (I should have thought so.) So how is it, <extra sarcasm mode activated>pray tell</extra sarcasm mode activated>, that humans manage to write correct software in the first place?
I personally have jested many times I picked my career because the logical soundness of programming is comforting to me. A one is always a one; you don’t measure it and find it off by some error; you can’t measure it a second time and get a different value.
I’ve also said code is prose for me.
I am not some autistic programmer either, even if these statements out of context make me sound like one.
The non-determinism has nothing to do with temperature; it has everything to do with the fact that even at temperature zero, a single meaningless change can produce a different result. It has to do with there being no way to predict what will happen when you run the model on your prompt.
Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you? These are different tasks that use different parts of the brain.
> A one is always a one; you don’t measure it and find it off by some error; you can’t measure it a second time and get a different value.
Linus Torvalds famously only uses ECC memory in his dev machines. Why? Because every now and again either a cosmic ray or some electronic glitch will flip a bit from a zero to a one or from a one to a zero in his RAM. So no, a one is not always a one. A zero is not always a zero. In fact, you can measure it and find it off by some error. You can measure it a second time and get a different value. And because of this ever-so-slight glitchiness we invented ECC memory. Error correction codes are a thing because of this fundamental glitchiness. https://en.wikipedia.org/wiki/ECC_memory
We understand when and how things can go wrong and we correct for that. The same goes for LLMs. In fact, I would go so far as to say that someone isn't really thinking the way a software/hardware engineer ought to think if this isn't almost immediately obvious.
Besides the but-they're-not-deterministic crowd there's also the oh-you-find-coding-painful-do-you crowd. Both are engaging in this sort of real men write code with their bare hands nonsense -- if that were the case then why aren't we still flipping bits using toggle switches? We automate stuff, do we not? How is this not a step-change in automation? For the first time in my life my ideas aren't constrained by how much code I can manually crank out and it's liberating. It's not like when I ask my coding agent to provide me with a factorial function in Haskell it draws a tomato. It will, statistically speaking, give me a factorial function in Haskell. Even if I have never written a line of Haskell in my life. That's astounding. I can now write in Haskell if I want. Or Rust. Or you-name-it.
Aren't there projects you wanted to embark on but the sheer amount of time you'd need just to crank out the code prevented you from even taking the first step? Now you can! Do you ever go back to a project and spend hours re-familiarising yourself with your own code? Now it's a two-minute "what was I doing here?" away.
> The non-determinism has nothing to do with temperature; it has everything to do with that fact that even at temp equal to zero, a single meaningless change can produce a different result. It has to do with there being no way to predict what will happen when you run the model on your prompt.
I never meant to imply that the only factor involved was temperature. For our purposes this is a pedantic correction.
> Coding with LLMs is not the same job. How could it be the same to write a mathematical proof compared to asking an LLM to generate that proof for you?
Correct, it's not the same. Nobody is arguing that it's the same. And being different doesn't make it wrong; it's just different.
> These are different tasks that use different parts of the brain.
> That's astounding. I can now write in Haskell if I want. Or Rust. Or you-name-it.
You're responsible for what you ship using it. If you don't know what you're reading, especially if it's a language like C or Rust, be careful shipping that code to production. Your work colleague might get annoyed with you if you ask them to review too many PRs with the subtle, hard-to-detect kind of errors that LLMs generate. They will probably get mad if you submit useless security reports like the ones that flood bug bounty boards. Be wary.
IMO the only way to avoid these problems is expertise and that comes from experience and learning. There's only one way to do that and there's no royal road or shortcut.
You’re making quite long and angry-sounding comments.
If you’re producing code in a language you don’t know, then this code is as good as a magical black box. It will never be properly supported; it’s dead code in the project that may do what it says it does, or may not (100%).
I think you should refrain from replying to me until you're able to respond to the actual points of my counter-arguments to you -- and until you are able to do so I'm going to operate under the assumption that you have no valid or useful response.
Non-determinism here means that with the same inputs, the same prompts, we are not guaranteed the same results.
This turns writing code this way into a tedious procedure that may not even work exactly the same way every time.
You should ask yourself, too: if you already have to spend so much time preparing various tests (you can’t trust the LLM to make them, or you have to describe them in so much detail), so much time describing what you need, and then hand-holding the model, all to get mediocre code that you may not be able to reproduce with the same model tomorrow - what’s the point?
I don't think he's missing the point at all. A band saw is an immutable object with a fixed, deterministic capability--in other words, a tool.
An LLM is a slot machine. You can keep pulling the lever, but you'll get different results every time. A slot machine is technically a machine that can produce money, but nobody would ever say it's a tool for producing money.
People keep trotting this argument out. But a band saw is not deterministic either, it can snap in the middle of a cut and destroy what you're working on. The point is, we only treat it like it's deterministic, because most of the time it's reliable enough that it just does what we want. AI technology will definitely get to the same level eventually. Clinging on to the fact that it isn't yet at that level today, is just cope, not a principled argument.
For every continuous real-valued function (on a compact domain) and every epsilon greater than zero, there’s a neural network (size unbounded) which approximates the function to precision epsilon.
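Spelled out (the usual single-hidden-layer statement, assuming a continuous target on a compact domain), it reads roughly:

    % Universal approximation, informal single-hidden-layer form
    \forall f \in C(K,\mathbb{R}) \text{ with } K \subset \mathbb{R}^n \text{ compact},\ \forall \varepsilon > 0,\
    \exists N \in \mathbb{N},\ a_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n :\
    \sup_{x \in K} \Bigl| f(x) - \sum_{i=1}^{N} a_i\, \sigma\bigl(w_i^{\top} x + b_i\bigr) \Bigr| < \varepsilon

Here sigma is a fixed non-polynomial activation, and N, the width, is finite but not bounded in advance.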
It sounds impressive and, as I understand it, is the basis for the argument that algorithms based on NNs, such as LLMs, will be able to outperform humans at tasks such as programming.
But this theorem contains an ambiguous term that makes it less impressive when you remove it.
Which, for me, makes such tools… interesting, I guess, for some applications. But they're not nearly impressive enough to remove the need for programmers, or to replace their labour entirely with automation, such that we need to concern ourselves with writing markdown files and wasting tokens asking the algorithm to try again.
So this whole argument that “you better learn to use them or be displaced in the labour market” rests on a weak premise.
I think the distinction without a difference is a tool being deterministic or not. Fundamentally, its nature doesn't matter, if in actual practice it outperforms everything else.
Be that as it may, and goalpost-moving aside: for me personally this fundamentally does matter. Programming is about giving instructions for a machine (or something mechanical) to follow. It matters a great deal to me that the machine reliably follows the instructions I give it. And compiler authors of the past have gone to great lengths to make their compilers produce robust (meaning deterministic) output, just as language authors have tried to make their standards as rigorous (meaning minimizing undefined behavior) as possible.
And for that matter, going back to the band saw analogy, a measure of the quality of a great band saw is, in fact, that the blade won’t snap in half in the middle of a cut. If a band saw manufacturer produces a band saw with a really low per-cut success probability (meaning it is less deterministic/more stochastic), that is a pretty lousy band saw, and good carpenters will know to stay away from that brand.
To me this paints a picture of a distinction that does indeed have a difference. A pretty important difference for that matter.
I feel like we're of similar minds on opposite sides, so perhaps you can answer me this: how is a deterministic AI any different from a search engine?
In other words, if you and I always get the same results back for the same prompt (the definition of determinism), isn't that just a really, really power-hungry Google?
I'm not sure pure determinism is actually a desirable goal. I mean, if you ask the best programmer in the world the same question every day, you're likely to eventually get a new answer at some point. But if you ask him, or I ask him, hopefully he gives the same good answer, to us both. In any case, he's not just a power hungry Google, because he can contextualize our question, and understand us when we ask in very obscured ways; maybe without us even understanding what we're actually looking for.
Have you never run a team of software engineers as a lead? Agentic coding comes naturally to a lot of people because that's PRECISELY what you do when you're leading a team: herding multiple brains and pointing them in the same direction so that when you combine all their work, it becomes something greater than the sum of its parts.
Lots of the complaints about agents sound identical to things I've heard, and even said myself, about junior engineers.
That said, there's always going to need to be people who can reach below the abstraction and agentic coding loops deprive you of the ability to get those reps in.
People say this about juniors, but I've never seen a junior make some of the boneheaded mistakes AI loves to make. Either I'm very lucky or other people have really stupid juniors on their teams lol.
Regardless, personally, there's no comparison between an LLM and a junior; I'd always rather work with a junior.
I've written this a few times, but LLM interactions often remind me of my days at Nokia - a lot of the interactions are exactly like what I remember with some of their cheap subcons there.
I even have exactly the same discussion after it messes up, like "My code is working, ignore that failing test, that was always broken, and I definitely didn't break it just now".
> Have you never run a team of software engineers as a lead?
I expect juniors to improve fast and get really good. AI is incapable of applying the teaching that I expect juniors to internalize to any future code that it writes.
Yes, I’ve read quite a lot about that bloody and terrible part of history.
The Luddites were workers who lived in an era without any social or state protections for labourers. Capitalists were using child labour to operate the looms because it was cheaper than paying anyone a fair wage. If you didn’t like the conditions, you could go work as an indentured servant for the state in the workhouses.
Luddites used organized protests in the form of collective violence to force action when they had no other leverage. People were literally shot or jailed for this.
It was a horrible part of history written by the winners. That’s why everyone thinks Luddites were against technology and progress instead of social reforms and responsibility.
In that case I really don't understand how you conclude there's any difference between being on the bottom or the top of the tool. The bare reality is the same: Skilled labourers will be replaced by automation. Woodworking tools (and looms) replaced skilled labourers with less-skilled replacements (such as children), and AI will absolutely replace skilled labourers with less-skilled replacements as well. I ask sincerely, I truly don't understand how this isn't a distinction without a difference. Have you spent time inside a modern furniture factory? Have you seen how few people it takes to make tens of tons of product?
I haven’t worked in a furniture factory but I have assembled car seats in a factory for Toyota.
The difference matters because the people who worked together to smash the looms created the myth of Ned Ludd to protect their identities from persecution. They used organized violence because they had no leverage otherwise to demand fair wages, safety guarantees, and other labour protections. What they were fighting for wasn’t the abolishment of automation and looms. It was for social reforms that would have given them labour protections.
It matters today because AI isn’t a profit line on any balance sheet right now but it is being used to justify mass layoffs and to reduce the leverage of knowledge workers in the marketplace. These tools steal your work without compensation and replace your job with capital so that rent seekers can seek rent.
It’s not a repeat of what happened in the Luddite protests but history is rhyming.
We agree, which makes me question your original point with the power tool somehow being different even more. Every automation gives more leverage to capital over labour. That's the history of technology. Downstream it makes this great life with indoor plumbing etc possible but automation in any form will always erode skilled labourers as a class. It's all essentially the same in that regard.
The introduction of looms wasn’t what displaced workers.
It was capitalists seeking profits by reducing the power of labour to negotiate.
We didn’t mass layoff carpenters once we had power tools and automation.
We had more carpenters.
Just like we had more programmers once we invented compilers and higher level languages.
LLMs just aren’t like power tools. Most programming tools aren’t like power tools.
Programming languages might be close to being “power tools,” as they fit in the “centaur” category. I could write the assembly by hand or write the bash scripts that deploy my VMs in the cloud. But instead I can write a program, give it to a compiler, and it will generate the code for me.
LLM generated code fits in the reverse-centaur category. I’m giving it instructions and context but I’m not doing the work. It is. My labour is to feed the machine and deliver its output. If there was a way to remove me from that loop, you bet I’d be out of a job in a heartbeat.
> I don’t feel like the boss is hot to replace me with power tools. Or to lay off half my team because we have power tools now.
That has more to do with how much demand there is for what you're doing. With software eating the world and hardware constraints becoming even more visible due to the chips situation, we can expect that there will be plenty of work for SWE's who are able to drive their coding agents effectively. Being the "top" (reasoning) or the "bottom" half is a matter of choice - if you slack off and are not highly committed to delivering quality product, you end up doing the "bottom" part and leaving the robot in the driver's seat.
I think this comparison isn’t quite correct. The downside with carpentry is that you only ever produce one copy of the thing you’re making; factory woodwork can churn out multiple copies of the same thing in a way hand carpentry never can. There is a hard limit on output, and output has a direct relationship to how much you sell.
Code isn’t really like that. Hand-written code scales just like AI-written code does. While some projects are limited by how fast code can be written, it’s much more often things like gathering requirements that limit progress. And software is rarely a repeated, one-and-done thing. You iterate on the existing product. That never happens with furniture.
There could be factories manufacturing your own design, just one piece. It won't be economical, but can be done. But parts are still the same - chunks and boards of wood joined together by the same few methods. Maybe some other materials thrown into the mix.
With software it is similar: Different products use (mostly) the same building blocks, functions, libraries, drivers, frameworks, design patterns, ux patterns.
> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
If you can't code by hand professionally anymore, what are you being paid to do? Bring the specs to the LLMs? Deal with the customers so the LLMs don't have to?
This is what I don’t understand: why highly-paid SWEs seem to think that their salaries will remain the same (if they even still have a job) if their role is now a glorified project manager.
Recently, I had to do an integration with a Chinese API for my company. I used Codex to do the whole thing.
Yet, there is no way a product manager without any coding experience could have done it. First, the API needed to communicate with the main app correctly, with things like formatting and correcting data; that required human engineering guidance and experience with the expected data, and the AI was lost. Second, the API was designed extremely poorly: you first had to make a request, then retry a second endpoint over and over again while the Chinese API did its thing in the background. Yes, I had to poll it. I then had to do load testing to make sure it was reliable (it wasn't). In the end, I gave a recommendation that we shouldn't rely on this Chinese company and should back out of the deal before we sent them a huge deposit.
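For flavour, the submit-then-poll dance looked roughly like the sketch below (the base URL, paths, field names, and timings here are invented for illustration, not the vendor's real API):

    # Hypothetical sketch of the submit-then-poll pattern described above.
    # BASE, the endpoints, and the JSON fields are placeholders.
    import time
    import requests

    BASE = "https://api.example-vendor.cn"

    def run_job(payload, timeout_s=120, interval_s=2.0):
        # Step 1: kick the job off on the first endpoint.
        resp = requests.post(f"{BASE}/v1/jobs", json=payload, timeout=10)
        resp.raise_for_status()
        job_id = resp.json()["job_id"]

        # Step 2: poll the second endpoint until done, failed, or timed out.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            status = requests.get(f"{BASE}/v1/jobs/{job_id}", timeout=10)
            status.raise_for_status()
            body = status.json()
            if body["state"] == "done":
                return body["result"]
            if body["state"] == "failed":
                raise RuntimeError(f"job {job_id} failed: {body.get('error')}")
            time.sleep(interval_s)
        raise TimeoutError(f"job {job_id} did not finish within {timeout_s}s")

Codex could type that loop; knowing to load-test it and to question the whole design was the part that needed experience.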
A non-technical PM couldn't have done what I did... for at least a few more years. You need a background and experience in software development to even know what to prompt the AI. Not only that, in the last 3 years, I developed an intuition on where LLMs fail and succeed when writing code.
I still have a job. My role has changed. I haven't written more than 10 lines of code in a day for months now. Yes, it's kind of scary for software devs right now but I'm honestly loving this as I was never the kind of dev who loved the code, just someone who needed to code to get what I wanted.
Architects and engineers are not construction workers. AI can build the thing but it needs to be told exactly what to build by someone who knows how software works.
I’ve spent enough time working with cross-functional stakeholders to know that the vast majority of PM (whether of the product, program, or project variety), will not be capable of running AI towards any meaningful software development goal. At best they can build impressive prototypes and demos, at worst they will corrupt data in a company-destroying level of failure.
Basically you feed it a massive volume of application code. It turns out there is a lot of commonality and latent repetition that can be teased out by LLMs, so you can get quite far with that, though it will fall down when you get into more novel terrain.
> AI can build the thing but it needs to be told exactly what to build by someone who knows how software works.
If only the AI were following my instructions instead of ignoring them, then complaining, telling me it is sorry, and returning some other implementation which also fails to follow my instructions ... :-(
Don't be stupid, if an AI can figure out how to arrange code, it can also figure out how to pick the right architecture choices.
Right now millions of developers are providing tons of architecture questions and answers. That's all going to be used as training data for the next model coming out in 6 months time.
This is a moat on our jobs as deep as a puddle.
If you believe LLMs will be able to do complex coding tasks, you must also concede they will be able to make the relatively simpler architecture choices easily simply by asking the right questions. Something they're already starting to be able to do.
It's not a massive jump to go from 'add a button above the table, to the right, that when clicked downloads an Excel file' to 'the client's asking to download an Excel file'.
If you believe the LLMs will graduate from junior-level coding to senior in the next year (which they're clearly not capable of doing yet, despite all the hype), there is no moat in going from coder to BA to PM.
But, the thinking goes, with AI in the mix, spinning up a new project or feature will be so low-friction that there will be 10x as many projects created. So our jobs are saved!
You have to move up the stack and make yourself a more valuable product. I have an analogy…
I’ve been working for cloud consulting companies/departments for six years.
Customers were willing to pay mid level (L5) consultants with @amazon.com by their names (AWS ProServe) $x to do one “workstream”/epic worth of work. I got paid $x - Amazon’s cut in cash and RSUs.
Once I got Amazon’ed, I had to get a staff level position (senior equivalent at BigTech) at a third party company where now I am responsible for larger projects. Before I would have needed people - now I need code gen tools and my quarter century of development experience and my decade of experience leading implementations + coding.
Doesn't this mean the ones that should be really worried are the project managers, since the SWE has better understanding over what's being done and can now orchestrate from a PM level?
Both should realize that if this all works out according to plan, then eventually there comes a point where there is no longer a need for their entire company, let alone any individual role in it.
They're delusional, but that's to be expected if you imagine them as the types for whom everything in life has always just kinda worked out. The idea that things could suddenly not work out is almost unimaginable to them, so of course things will change, but not, for them, substantially for the worse.
You are under a delusion. A glorified project manager will not produce production-quality code, no matter what. At least not until we have reached that holy grail of AGI. But if that ever happens, the world will have way bigger problems to deal with.
I don’t think that’s the real dichotomy here. You can either produce 2-5x good, maintainable code, or 10-50x more dogshit code that works 80-90% of the time and will be a maintenance nightmare.
The management has decided that the latter is preferable for short term gains.
> You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.
It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code, it was understanding and making sure the code works. And with LLMs as unreliable as they are, you have to carefully review every line they produce - at which point you didn't save any time over doing it yourself.
Look at the pretty pictures AI generates. That's where we are with code now. Except you have ComfyUI instead of ChatGPT. You can work with precision.
I'm a 500k TC senior SWE. I write six nines, active-active, billion dollar a day systems. I'm no stranger to writing thirty page design documents. These systems can work in my domain just fine.
> Look at the pretty pictures AI generates. That's where we are with code now.
Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and don't have value. Look no further than Ubisoft and their Anno 117 game for proof.
Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.
Not to mention how hard it is to actually get what you want out of it. The image might be pretty, and kinda sorta what you asked for. But if you need something specific, trying to get AI to generate it is like pulling teeth.
Since we’re apparently measuring capability and knowledge via comp, I made 617k last year. With that silly anecdote out of the way, in my very recent experience (last week), SOTA AI is incapable of writing shell scripts that don’t have glaring errors, and also struggles mightily with RDBMS index design.
Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
The problem is not that it can’t produce good code if you’re steering. The problem is that:
There are multiple people on each team; you cannot know how closely each teammate monitored their AI.
Somebody who does not care will vastly outperform your output. By orders of magnitude. With the current unicorn-chasing trends, that approach tends to be more rewarded.
This produces an incentive to not actually care about the quality. Which will cause issues down the road.
I quite like using AI. I do monitor what it’s doing when I’m building something that should work for a long time. I also do totally blind vibe-coded scripts when they will never see production.
But for large programs that will require maintenance for years, these things can be dangerous.
> You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.
I agree, but this is an oversimplification - we don't always get the speed boosts, specifically when we don't stay pragmatic about the process.
I have a small set of steps that I follow to really boost my productivity and get the speed advantage.
(Note: I am talking about AI-coding and not Vibe-coding)
- You give all the specs, and there is "some" chance that the LLM will generate exactly the code required.
- In most cases, you will need to do >2 design iterations and many small iterations, like instructing the LLM to handle errors properly and recover gracefully.
- This will definitely increase speed 2x-3x, but we still need to review everything.
- Also, this doesn't take into account the edge cases our design missed. I don't know about big tech, but when I have to do the following to solve a problem:
1. Figure out a potential solution
2. Make a hacky POC script to verify the proposed solution actually solves the problem
3. Design a decently robust system as a first iteration (that can have bugs)
4. Implement using AI
5. Verify each generated line
6. Find out edge cases and failure modes missed during design, and repeat from step 3 to tweak the design, or repeat from step 4 to fix bugs.
WHENEVER I jump directly from 1 -> 3 (vague design) -> 5, the speed advantages evaporate.
PMs can always keep their jobs because they appear to be working and they keep direct contact with the execs. They have taken a bigger and bigger part of the tech pie over the years, and soon they will finally take it all.
That's not what I am seeing being played out at a big corp. In reality everyone gets thrown under the bus, no matter if C-level or pleb, if they don't appear to know how to drive the AI metrics up. Just being a PM won't save your job any more than it saves that of the dev who doesn't know how to acquire and use new skills. On the contrary, the jobs of the more competent devs are safer than those of some managers here who don't know the tech.
>"If you can't code by hand professionally anymore"
Then you are simply fucked. The code you deliver will contain bugs which the LLM will sometimes be able to fix and sometimes will not. And as a person who has no clue, you will have no idea how to fix it when the LLM cannot. Also, even when LLM code is correct, it can and sometimes does introduce gross performance fuckups, like using patterns with N-squared complexity instead of N, for example. Again, as a clueless person you are fucked. And if one goes into areas like concurrency and multithreading optimizations, one gets fucked even more. I can go on and on with many more particular ways to get screwed.
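One toy illustration of the N-squared-instead-of-N kind of thing (invented for illustration, not from any real codebase):

    # Deduplicate while preserving order.
    # Accidentally quadratic: `x not in seen` scans the list for every element, so O(N^2).
    def dedupe_slow(items):
        seen = []
        out = []
        for x in items:
            if x not in seen:
                seen.append(x)
                out.append(x)
        return out

    # Same behaviour in O(N): set membership checks are O(1) on average.
    def dedupe_fast(items):
        seen = set()
        out = []
        for x in items:
            if x not in seen:
                seen.add(x)
                out.append(x)
        return out

Both look fine in review and both pass tests on small inputs; only someone who can actually read the code notices that the first one will fall over at scale.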
For a person who can hand-code, AI becomes an amazing tool. For me, it helps immensely.
I am currently doing 6 projects at the same time, where before I would only have been doing one at a time. This includes the requirements, design, implementation and testing.
Your code in $INSERT_LANGUAGE is no less of a spec to machine code than English is to $INSERT_LANGUAGE.
Spec is still needed; spec is the core problem of engineering. Too much specialization has created job titles like $INSERT_LANGUAGE engineer, which deviated too far from the core problem, and that is being rectified now.
When the cost of defects and of the AI tooling itself inevitably rises, I think we are likely to see a sudden demand for the remaining employed developers to do more work "by hand".
I'm still not sure about the productivity. Last time I asked an LLM to generate a lib for me, it did it in a few seconds, but the result took me the whole day to review and correct. About the same time it would have taken me to write it from scratch.
That is exactly my experience. Every single time I get an LLM to write some code for me, it saves me no time because I have to review it carefully to make sure there are no mistakes. LLMs still, even after work has been done, completely make up methods and syntax that doesn't exist. They still get logic wrong or miss requirements.
Right now the only way to save time with LLMs is to trust the output and not review it. But if you do that, you're just going to produce crappy software.
- documentation for well-known frameworks and libs, "how do I do [x] in [z]?" questions
- port small code chunks from one language to another
- port configuration from one software to another (example: I got this Apache config, make me the equivalent in NGinX)
Which is already pretty cool if you don't think about the massive amount of energy spent for this, but definitely not the "10x" productivity boost I hear about.
Pretty much exactly this for me, except I can coax it into writing decent unit tests (really gotta be diligent though, it loves mocking out the things it's testing lol) and CI stuff (mostly because I despise Actions YAML and would rather let it do it). But I do get decent results in both areas on a regular basis.
I think you're supposed to ask another LLM instance to review it, then ask the first LLM instance to implement corrections, or that's how I understand it.
That is not a technical constraint and may be automated if it made sense financially.
Same with software - for some time software won't be all designed, coded, tested, deployed to production without human supervision or approval. But the pieces in between are more and more filled by AI, as are the logistics of designing, manufacturing and distributing sofas.
The reason this analogy falls down is that tools typically do one thing, do it extremely well, are extremely reliable. When I use a table saw, I know that it's going to cut this board into two pieces, exactly in this spot, and it'll do that exactly the same way every single time I use it.
You cannot tell AI to do just one thing, have it do it extremely well, or do it reliably.
And while there's a lot of opinions wrapped up in it all, it is very debatable whether AI is even solving a problem that exists. Was coding ever really the bottleneck?
And while the hype is huge and adoption is skyrocketing, there hasn't been a shred of evidence that it actually is increasing productivity or quality. In fact, in study after study, they continue to show that speed and quality actually go down with AI.
Some people like to spin their own wool, weave their own cloth, sew their own clothes.
A few even make a good living by selling their artisanal creations.
Good for them!
It's great when people can earn a living doing what they love.
But wool spinning and cloth weaving are automated and apparel is mass produced.
There will always be some skilled artisans who do it by hand, but the vast majority of decent jobs in textile production are in design, managing machines and factories, sales and distribution.
The metaphor doesn't work because all of the things mentioned have to be individually fabricated. But software doesn't: copies are free. That's the magic of software - you don't need much of it, you just need to be correct/smarter.
It's pretty surprising to see people on this site (mostly programmers, I assume) think of code in terms of quantity. I always thought developers believe that less code is better.
Your unstated assumption is that machine-written code is lower quality than human-written code. That may be true for the top 5% of developers today.
But I don't think that assumption is true for the median developer today, and it probably won't be true for the 5th-percentile developer by this time next year.
Like, do you even know how furniture is designed and built? Do you know how software is designed and built? Where is this comment even coming from? And people are agreeing with this?
A friend of mine reposted someone saying that "AI will soon be improving itself with no human intervention!!" And I tried asking my friend if he could imagine how an LLM could design and manufacture a chip, and then a computer to use that chip, and then a data center to house thousands of those computers, and he had no response.
People have no perspective but are making bold assertion after bold assertion
If this doesn't signal a bubble I don't know what does
I'm tired of the carpentry analogy. It feels like a thought-stopping cliche, because it's used in every thread where this topic comes up. It misses the fact that coding is fundamentally different, and that there are still distinct advantages to writing at least some code by hand, both for the individual and the company.
The question nobody asks is what will happen once atrophy kicks in and nobody is able to firefight the production problems GenAI isn't able to fix without making things worse, with a broken system bleeding a million dollars per day or more.
It's at least possible that we would eventually roll back to the status quo and swear to never devalue human knowledge of the problems we solve.
> swear to never devalue human knowledge of the problems we solve.
Love this way of putting it. I hate that we can mostly agree that devaluing expertise of artists or musicians is bad, but that devaluing the experience of software engineers is perfectly fine, and actually preferable. Doing so will have negative downstream effects.
To me the biggest difference is that there’s some place for high quality, beautiful and expensive handcrafted woodwork, even if it’s niche in a world where Ikea exists. Nobody will ever care whether some software was written by humans or a machine, as long as it works and works well.
^This. Even if there was a demand for hand-crafted software, it would be very hard to prove it was hand-crafted, but it's unlikely there could be a demand for the same reasons as there is no market for e.g. luxury software. As opposed to physical goods, software consumers care for the result, not how it was created.
Maybe a better question is: Is natural language to code what high-level programming is to hand-written assembly? Brooks claims the "essential complexity" lies in the specification: if a spec is precise enough to be executable, it’s just code by another name. But is the gap actually that large today? When I ask for a "centered 3x3 Tailwind grid", the patterns are so standardized that the ambiguity nearly vanishes. It’s like asking for a Java 8 main method. The implementation is so predictable that the intent and the code are one and the same. Or, in jargon: most coding has a strong prior that leads to a predictable posterior.
The key question now is: how far can AI go? It started with simple auto-completion, but as AI absorbs more procedural know-how, it becomes capable of generating increasingly larger chunks of maintainable code. Perhaps we are reaching a point where established patterns are so well-understood that AI can bridge the gap between a vague intent and a working system, effectively automating away what Brooks once considered essential complexity.
In the long run, this probably makes experts more valuable, but it’ll gut the demand for standard engineers. So much of our market value is currently tied to how hard it is to transfer expertise among humans. AI renders that bottleneck moot. Once the know-how is commoditized, the only thing left is the what and why.
I like programming by hand too. Like many of us here, I've been doing this for decades. I'm still proud of the work I produced and the effort I put in. For me it's a highly rewarding and enjoyable activity, just like studying mathematics.
Nevertheless, the main motivator for me has been always the final outcome - a product or tool that other people use. Using AI helps me to move much faster and frees up a lot of time to focus on the core which is building the best possible thing I can build.
> But we shouldn't pretend that you will be able to do that professionally for much longer.
Opus 4.5 just came out around 3 months ago. We are still very early in this game. Creating things this year already makes me feel like I'm in the Enchanted Pencil (*) cartoon, in which the boy draws an object with a magic pencil and it becomes reality within seconds. With the collective effort of everyone involved in building the AI tools, and the incentives aligned (as they are right now), the progress will continue to be very rapid. You can still code by hand, but it will be very hard to compete in the market without the use of AI.
>> For me it's a highly rewarding and enjoyable activity, just like studying mathematics. Nevertheless, the main motivator for me has been always the final outcome
There are two attitudes stemming from the LLM coding movement: those who enjoy the craft of coding MORE, and those who enjoy seeing the final output MORE.
There are going to be minimal "junior" jobs where you're mostly implementing - I guess roughly equivalent to working wood by hand - but there are still going to be jobs resembling senior-level FAANG jobs for the foreseeable future.
Someone's going to have to do the work, babysit the algorithm, know how to verify that it actually works, know how to know that it actually does what it's supposed to do, know how to know if the people who asked for it actually knew what they were asking for, etc.
Will pay go down? Who knows. It's easy to imagine a world in which this creates MORE demand for seniors, even if there's less demand for "all SWEs" because there's almost zero demand for new juniors.
And at least for some time, you're going to need non-trivial babysitting to get anything non-trivial to "just work".
At the scale of a FAANG codebase, AI is currently not that helpful.
Sure, Gemini might have a million-token context, but the larger the context, the worse the performance.
This is a hard problem to solve, and one that has had minimal progress in, what, 3 years?
If there's a MAJOR breakthrough on output performance wrt context size - then things could change quickly.
The LLMs are currently insanely good at implementing non-novel things in small context windows - mainly because their training sets are big enough that it's essentially a search problem.
But there's a lot more engineering jobs than people think that AREN'T primarily doing this.
If I'm using the right tools for the job, I don't feel like the LLM helps outside of minor autofilling or writing quick one-off scripts. I do use LLMs heavily at work, but that's cause half the time I'm forced to use cumbersome tooling like Java w/ some boilerplatey framework or writing web backends in C++ for no performance reason.
Coding can be a joy, and art-like. I — speaking for myself — do feel incredibly lonely when doing it alone for long stretches. It’s closer to doing graduate mathematics, especially on software that fewer and fewer know how to do well. It is also impossible to find people who would pay for _only_ beautiful code.
I agree with this analogy, as someone who professionally codes and someone who pulls out the power tools to build things around my house but uses hand tools for furniture and chairs.
No job site would tolerate someone bringing a hand saw to cut rafters when you could use a circular saw; the outcome is what matters. In the same vein, if you’re too sloppy cutting with the circular saw, you’re going to get kicked off the site too. Just keep in mind a home made from dimensional lumber is on the bottom of the precision scale. The software equivalent of a rapper’s website announcing a new album.
There are places where precision matters, building a nuclear power plant, software that runs an airplane or an insulin pump. There will still be a place for the real craftsman.
> This is no different then carpentry. Yes, all furniture can now be built by machines. Some people still choose to build it by hand. Does that make them less productive? Yes.
I take issue even with this part.
First of all, all furniture definitely can't be built by machines, and no major piece of furniture is produced by machines end to end. Even assembly still requires human effort, let alone design (and let alone choosing, configuring, and running the machines responsible for the automatable parts). So really, a given piece of furniture may range from 1% machine-built (just the screws) to 90%, but it's never 100% and rarely that close to the top of the range.
Secondly, there's the question of productivity. Even with furniture measuring by the number of chairs produced per minute is disingenuous. This ignores the amount of time spent on the design, ignores the quality of the final product, and even ignores its economic value. It is certainly possible to produce fewer units of furniture per unit of time than a competitor and still win on revenue, profitability, and customer sentiment.
Trying to apply the same flawed approach to productivity to software engineering is laughably silly. We automate physical good production to reduce the cost of replicating a product so we can serve more customers. Code has zero replication cost. The only valuable parts of software engineering are therefore design, quality, and other intangibles. This has always been the case, LLMs changed nothing.
The nail-in-the-coffin moment for me, when I realized AI had turned into a full-blown cult, was when people started equating a "hand-crafted artisanal" piece of software used by a million people with a hand-crafted artisanal chair used by their grandma.
The cult has its origins in Taylorism - a sort of investor religion dedicated to the idea that all economic activity will eventually be boiled down to ownership and unskilled labor.
> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.
Bullshit. The value in software isn't in the number of lines churned out, but in the usefulness of the resulting artifact. The right 10,000 lines of code can be worth a billion dollars, the cost to develop it is completely trivial in comparison. The idea that you can't take the time to handcraft software because it's too expensive is pernicious and risks lowering quality standards even further.
I could use AI to churn out hundreds of thousands of lines of code that doesn't compile. Or doesn't do anything useful, or is slower than what already exists. Does that mean I'm less productive?
Yes, obviously. If I'd written it by hand, it would work ( probably :D ).
I'm good with the machine milled lumber for the framing in my walls, and the IKEA side chair in my office. But I want a carpenter or woodworker to make my desk because I want to enjoy the things I interact with the most. And don't want to have to wonder if the particle board desk will break under the weight of my frankly obscene number of monitors while I'm out of the house.
I'm hopeful that it won't take my industry too long to become inoculated against the FUD you're spreading about how soon all engineers will lose their jobs to vibe coders. But perhaps I'm wrong, and everyone will choose the LACK over the table that lasts more than most of a year.
I haven't seen AI do anything impressive yet, but surely it's just another 6mo and 2B in capex+training right?
LLMs and agents are merely tools to be wielded by a competent engineer. Very sophisticated tools, but tools nonetheless. Maybe it’s because I live in the South East, as far away as I can possibly get from the echo chamber (on purpose), but I don’t see this changing anytime soon.
Not sure why you are so sure that using LLMs will be a professional requirement soon enough.
E.g., in my team I heavily discourage generating and pushing generated code into a few critical repositories. When hiring, one of my points was not to hire an AI enthusiast.