Hacker News | redox99's comments

Reddit turned way more into an echo chamber over time. The moderators and the downvote system destroyed the site. The shift from a free-speech, libertarian, and anarchist ethos to a heavily left-leaning one definitely didn't help.

> AI programming is fundamentally different from programming

It's really not. Maybe vibecoding, in its original definition (not looking at the generated code), is fundamentally different. But most people are not vibe coding outside of pet projects, at least not yet.


Hopefully this does not devolve into ‘nuh-uh’-‘it is too’, but I disagree.

And that's putting aside the AI engineering part, where you use a model as a brick in your program.

Classic programming is based on the assumption that there is a formal, strict input language. When programming, I think in that language; I hold the data structures and connections in my head. When debugging, I have intuition about what is going on because I know how the code works.

When working on somebody else’s code base, I bisect and try to find the abstractions.

When coding with AI, this does not happen. I can check the code it outputs, but the speed and quantity do not permit the same level of understanding unless I eschew all the benefits of using AI.

When coding with AI, I think about the context, the spec, the general shape of the code. When the code doesn’t build or crashes, the first reflex is not to look at the code; it’s to prompt the AI to figure it out.


This is the same argument that people used to make against compilers.

It is not. One version of a compiler on one platform transforms a specific input into an exact and predictable artefact.

A compiler will tell you what is wrong. On top of that, the intent is 100% preserved even when the code is wrong.

An LLM will transform an arbitrarily vague input into an output. Adding more specification may or may not change the output.

There is a fundamental difference between asking for “make me a server in go that answers with the current time on port 80” and actually writing out the code, where you _have to_ make all decisions such as “wait, in what format?” beforehand. (And using the defaults is also making a decision, because there are defaults.)

Compilers have undefined behaviour, but that UB exists in well-defined places.

Even a 100% perfect LLM that never makes mistakes has, by definition, UB everywhere the spec is lacking.


Right, they allow for the idea of gradual specification - you can write in broad strokes where you don't care about the details, and in fine detail when you do. Whether the LLM followed the spec or not is mostly down to having the right tooling.

Compilers are an abstraction. AI coding is not an abstraction by any reasonable definition.

You're only thinking that because we're mostly still at the imperative, REPL stage.

We're telling them what to do in a loop. Instead we should be declaring what we want to be true.


You’re describing a hypothetical that doesn’t exist. Even if we assume it will exist someday we can’t reasonably compare it to what exists today.

It exists today, please message me if you’d like to try it

It very much is. It’s more like telling an intern what to do and then reviewing their code. Anyone can do it, and it results in (mostly) slop.

>But most people are not vibe coding outside of pet projects, at least yet.

Major corporations have had outages thanks to AI slop code. Lol, the idea that people aren't vibe coding outside of pet projects is hilarious.


The idea that everyone using LLMs is vibe coding is equally hilarious.

If you use an LLM to generate source code, you are vibecoding.

You specify the problem in natural language (the vibes) and the LLM spits out source (the code).

Whether you review it or not, that is vibecoding. You did not go through the rigor of translating the requirements to a programming language, you had a nondeterministic black box generate something in the rough general vicinity of the prompt.

Are people seriously trying to redefine what vibecoding is?


> If you use an LLM to generate source code you are vibecoding

No, you're not.

> Are people seriously trying to redefine what vibecoding is?

Yes, you are.


No, that is literally vibecoding. Reviewing vibecoded source is just an extra step. It's like saying "I'm not power-tool gardening, I use a pair of gardening scissors afterwards." You still did power-tool gardening.

As additional proof, the dictionary definition of vibe coding is "the use of artificial intelligence prompted by natural language to assist with the writing of computer code" [1]

It seems like vibecoders don't like the label and are retconning the term.

[1] https://www.collinsdictionary.com/dictionary/english/vibe-co...


The takes on LLM programming on reddit are hilarious and borderline sad. It's way past the point of denial, now into delusions.

They truly believe LLMs are close to useless and won't improve. They believe it's all just a bubble that will pop and people will go back to coding character by character.


Are there really 70 percent sRGB laptops at $600?


Power cycling is not a solution. It's a crappy workaround, and you still had downtime because of it. The device should never get stuck in the first place, and the solution for that is fixing whatever bug is in the firmware.

If they want to reduce support calls, they should ship more reliable gear.


> Power cycling is not a solution. It's a crappy workaround, and you still had downtime because of it. The device should never get stuck in the first place, and the solution for that is fixing whatever bug is in the firmware.

I'm sympathetic to the argument that companies should make support calls less necessary by providing better products and services, but "just write bug-free software" is not a solution.


This isn't a case where you need bug free software. This is a case where the frequency of fatal bugs is directly proportional to the support cost. Fix the common bugs, then write off the support for rare ones as a cost of doing business.

The effect of cheap robo support is not reducing the cost of support. It is reducing the cost of development by enabling a more buggy product while maintaining the previous support costs.


Giving the device enough RAM to survive memory leaks during heavy usage would also be a valid option, as is automatic rebooting to get the device back into a clean state before the user experiences a persistent loss of connectivity. There are a wealth of available workarounds when you control everything about the device's hardware and software and almost everything about the network environments it'll be operating in. Fixing all the tricky, subtle software bugs is not necessary.


For a community full of engineers, I'm always surprised that people take absolutist views on minor technical decisions rather than thinking about the tradeoffs that got them there.


The obvious trade-off here is engineering effort vs. support cost, and when the tech support solution is "have you tried turning it off, then on again?", we know which path was chosen.


You can't just throw RAM at embedded devices that you make millions of and have extremely thin margins on. Have you bothered to look at the price of RAM today? At high numbers and low margins you can barely afford to throw capacitors at them, let alone precious rare expensive RAM.


No, XFinity are the ones who decided their routers “““need””” to have unwanted RAM-hungry extra functionality beyond just serving their residential customers' needs. Their routers participate in an entire access-sharing system so they can greedily double-dip by reselling access to your own connection that you already pay them for:

- https://www.xfinity.com/learn/internet-service/wifi

- https://www.xfinity.com/support/articles/xfinity-wifi-hotspo...


We're talking about devices where the retail price is approximately one month of revenue from one customer, and that's if there isn't an extra fee specifically for the equipment rental. Yes, consumer electronics tend to have very thin margins, but residential ISPs are playing a very different game.


A memory leak will, by definition, consume any amount of RAM, so adding more RAM is not a solution either.
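For illustration, a deliberately leaky sketch in Go (hypothetical, not real firmware code): a cache that only ever grows, so any fixed amount of RAM merely delays the failure rather than preventing it.

```go
package main

import "fmt"

// cache grows without bound: entries are added per "request" and never
// evicted. This is the shape of a leak that no RAM upgrade can fix.
var cache [][]byte

// handleRequest retains 1 KiB forever on every call.
func handleRequest() {
	cache = append(cache, make([]byte, 1024))
}

func main() {
	for i := 0; i < 10000; i++ {
		handleRequest()
	}
	// Roughly 10 MiB retained; double the traffic and it doubles again.
	fmt.Printf("retained ~%d KiB after %d requests\n", len(cache), len(cache))
}
```

Doubling the device's RAM just doubles the time until the same failure; only eviction (or a restart) actually bounds it.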


You're implying all software/hardware is of equal quality. I've had many routers with years of uptime, never requiring a reboot.

And I'm sure they had a lot of bugs, but not every bug means hanging to the point of requiring a reboot during normal operation.

Even a proper watchdog would, after some downtime, recover the system.


IME ChatGPT is pretty mid at search. Grok, although significantly dumber, is really strong at diligently going through hundreds of search results, and it is much more tuned to rely on search results instead of its internal knowledge (which, depending on the case, can be better or worse). It's the only situation where Grok is worth using, IMO.

Gemini is really good with many topics. Vastly superior to ChatGPT for agronomy.

You should always use the best model for the job, not just stick to one.


I'd be friends with you. Wish you had contact info in your profile.


Auto will never work: for the exact same prompt, sometimes you want a quick answer because it's not very important to you, and sometimes you want the answer to be as accurate as possible, even if you have to wait 10 minutes.

In my case it would be more useful to have a slider for how long I'm willing to wait. For example: instant, think up to 1 minute, or think up to 15 minutes.


That's pretty close to what they have. They just named them Instant, Thinking (Standard), and Thinking (Extended), and they're discrete presets instead of a slider.


But the time it takes is too variable. Even standard can sometimes take 15+ minutes.


They have an "answer now" button that stops the reasoning and starts the reply. Same with Gemini.


Yeah, I use that, but it's not really a solution that lets you have only Auto. It doesn't help when it chooses Instant instead of Thinking, and it's also much slower than using Instant outright, because the Skip button doesn't show immediately and it's generally slow to restart.


I'm growing so tired of the typical vibecoded UI design: the overuse of cards, icons, and emojis, and zero images.


Because it's pretty useful, for example to avoid refreshing data while the tab is unfocused and to refresh immediately on focus.


> For a disease which (to my knowledge) can’t be slowed down or reversed

There's Lecanemab and Donanemab. The effects are modest, however.


Trontinemab is in trials right now, with 92% of patients achieving low amyloid levels. And more people should be able to take it, as it causes less brain swelling (ARIA-E). I'm unaffiliated; I just follow medical research in my free time. But I'm quite hopeful about this medication.

