Hacker News | new | past | comments | ask | show | jobs | submit | bmitc's comments | login

That sounds like a nightmare.

I think it's most easily summarized by: "It's still important to know things, and what was important to know before hasn't really changed." If anything, agentic coding highlights and accentuates the need for good systems and software design know-how.

> What originally got me excited to build TUIs was the concept of delivering apps over the wire via SSH.

This echoes my main interest in TUIs. Otherwise, I greatly prefer a good desktop application.


What does being in the middle of a metroplex have to do with air and plane traffic incidents? The only thing I can guess is that it constrains the airport from growing or remodeling itself, perhaps leading to inefficiencies.

Old airports have terrible runway design: the runways intersect (to save space), but this is dangerous and requires much more ATC coordination to manage. With modern runway design, if a plane takes off or lands out of sequence, it's unlikely to hit anything. With intersecting runways, that same accident becomes potentially fatal. These airports were also designed for smaller planes, fewer planes, and fewer passengers.

These issues are obvious to airport management, but airports cannot expand because nearby land is already allocated. The easiest option is to build a new airport, far from existing development.

Most of these airports were originally built far away from the city, but in the past half century the cities expanded, so they now envelop the airport.


Less room for error, and they can't build extra runway(s) to cope with increasing demand. The current Mentour Pilot video actually discusses this issue in some depth regarding one of Washington DC's airports.

Yea, if anything, it's Apple's normal mode to heavily disturb and move things around.

Agreed. I vaguely remember another HN link that said Apple tried a competing-team approach to building a better Siri, but it fell apart for internal-politics reasons?

It's curious how bad people say Azure is. I've never used it, but I've used AWS, and AWS is a gigantic mess. So it concerns me if Azure is worse than a gigantic mess.

Azure is worse. This series of posts was posted here not that long ago: https://isolveproblems.substack.com/p/how-microsoft-vaporize...

Azure's management APIs break connections coming from outside Azure's network every time they use DNS to execute a blue/green swap on their public load balancers. Existing connections are not gracefully drained. Terraform state gets corrupted (it thinks the operation failed when it actually succeeded and the resource was created) and requires manual fixing.

This happened frequently enough, at a large enough scale, that we seriously considered building automation to analyze the Terraform logs for the connection breaking and automatically import the created resource.
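A rough sketch of what that automation might look like. This is hypothetical: `suggest_imports` and the log patterns are illustrative assumptions (real Terraform error output varies by provider and version), but the real `terraform import <address> <id>` command is what would recover the state.

```python
import re

# Hypothetical sketch: scan Terraform apply output for creates that likely
# succeeded server-side but failed client-side on a dropped connection,
# and suggest the `terraform import` commands to repair the state.
# The error markers and regexes below are illustrative assumptions.

RESET_MARKERS = (
    "connection reset by peer",
    "context deadline exceeded",
    "unexpected eof",
)

def suggest_imports(log_text: str) -> list[str]:
    """Return `terraform import` commands for failed-looking creates
    whose error block still contains an Azure resource ID."""
    commands = []
    for block in log_text.split("\n\n"):
        if not any(marker in block.lower() for marker in RESET_MARKERS):
            continue
        # Terraform error blocks name the resource like: "with azurerm_lb.example,"
        addr = re.search(r"with\s+([\w.\[\]\"-]+),", block)
        # An Azure resource ID, if the provider echoed one back before dying.
        res_id = re.search(r"(/subscriptions/\S+)", block)
        if addr and res_id:
            commands.append(f"terraform import {addr.group(1)} {res_id.group(1)}")
    return commands
```

The commands would still need a human (or a very cautious pipeline) to review before running, since importing the wrong ID corrupts state further.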

Azure support was completely worthless.


> Azure support was completely worthless.

It's quite incredible that a support bill from Azure in the >$10k per month range makes public Google support (not even GCP with a support contract) look not crap.


AWS is a complex mess, but it’s pretty good at delivering its services reliably. Azure is a mess that is also unreliable.

The .NET team is a counter example, aside from the GUI situation.

> OpenAI is worth many multiples of that.

How?


Because they recently issued shares at a price many multiples of that, and people bought them. How else would you define financial worth?

I would use your number adjusted by some demand elasticity curve.

The "back-of-the-napkin" only has enough room to estimate based on recently issued share price. Seems reasonable to me.

Sure, for napkin level math you can go with this, and multiply by some simple multiplier, I like 70%.

Does speculation equal worth?

Same thing happened with self-driving cars. Oh and cryptocurrencies.

Self-driving never had the amount of compute, research adoption, and money that current AI overall has. It's not comparable.

Crypto was flawed from the beginning, and lots of people didn't understand it properly, not even the fact that a blockchain can't secure a transaction from anything outside the blockchain.


The LLMs are flawed, and lots of people don't understand them properly.

People are researching how to make LLMs more stable, and from a statistical point of view, we are already down to 10% (progress is being made here).

LLMs don't have to be perfect, they just need to be as good as humans and cheaper or easier to manage.


> Self-driving never had the amount of compute, research adoption, and money that current AI overall has. It's not comparable.

$100+ billion in R&D and it's not comparable... hmm


> Self-driving never had the amount of compute, research adoption, and money that current AI overall has.

And yet they don't do a really good job at much of anything, save for software development, on which people still seem pretty split as to whether it's a helpful thing. That's before we even factor in the cost.


I find them very helpful. I use Gemini regularly for multiple things.

I also believe that whatever code researchers and other non-software-engineers wrote before coding agents was similarly shitty but took them a lot longer to write.

Like, do you know how many researchers need to do some data analysis and hack around with code because they never learned programming? So, so many. If they know how to verify their data (which they already needed to know how to do), an LLM helps them already.

There is also plenty of other code where perfection doesn't matter. Non-SaaS software exists.

For security experts, we just saw what's happening. The curl creator mentioned online that the newest AI reports for security issues are real, and the number of security gaps found is real and a lot of work.

Image generation is very good, and you can see it everywhere today already: from cheap restaurants using it, to invitations, WhatsApp messages, social media, and advertising.

I have a work colleague who has been in IT for 6 years and has a degree, yet he is so underqualified that if you gave me his salary as tokens today, I wouldn't think for a second before replacing him.


I don't particularly care about coding and didn't weigh in on it. There is no dispute that people debate whether it is effective at that. You can take that debate up with them, not me.

Companies are starting this year with an agentic layer. We will see how this affects broader areas.

Yeah, and every year before, there was another poster telling me the next model iteration would be enough.

The problem here is the adoption curve; right now it might feel to you that it's not worth it or not happening, as it might for most people.

Then suddenly one model update moves it from 80% to 85%, and now 30% of the market wants to use it.

Then it might already be too late to use it to your advantage, be a valuable expert, or decide things long term based on the new state of affairs.


You're in denial and this is cope. The tools aren't even close. The notion that any model is 5% away from doing what has been promised *FOR YEARS* is just facially ridiculous.

> Then it might already be too late to use it to your advantage, be a valuable expert, or decide things long term based on the new state of affairs.

There's no universe where this is happening. The tools just are not that good. It's been years of folks like you telling me my job will disappear, but the only thing that this has demonstrated is that the vast majority of programmers have *NO IDEA* what other people actually do for a living and how they do it.


The current predictions, and the predictions from a few years ago, point more toward 2030.

I personally think we are entering a very critical / interesting phase of 5-15 years where we will see how it might affect us.

Besides, it already affects real jobs and real people: translators, average/junior graphic designers, etc.


> The current predictions, and the predictions from a few years ago, point more toward 2030.

They weren't back then.

> I personally think we are entering a very critical / interesting phase of 5-15 years where we will see how it might affect us.

That's a nice thought.

> Besides, it already affects real jobs and real people: translators, average/junior graphic designers, etc.

The predictions were much greater than just translation and graphic design, so again, what's your point?


The point is, a lot of work went into making that happen. I.e., plain text as it is today is not some inherent property of computing. It is a binary protocol and displaying text through fonts is also not a trivial matter.

So my question is: what are we leaving on the table by over focusing on text? What about graphs and visual elements?


TUIs can include these; see the kitty graphics protocol, implemented by most if not all modern terminals.

https://sw.kovidgoyal.net/kitty/graphics-protocol/


I was not being very descriptive, but I was referring to the next layer up of building blocks. Instead of just text, we could also express things in hybrid ways, with text but also visual nodes that can carry denser information. The usual response is that those things don't work with text-based tools, but that's my point: text-based tools needed invention and decades of refinement, and they're still not all that great.

And what do we gain by leaving things on the table?
