Guvante's comments

"They have a long history of long rants attacking people and projects" in response to a long post...

You are very much saying that OP is an attack post.

Or at least implying it, which makes it tonally dissonant to claim otherwise.

If you didn't believe it was wrong you would comment on the post but you are explicitly avoiding doing that.


I hate it because typically that style of writing appeared when someone cared about what they were writing.

While it wasn't a great signal, it was a decent one, since no one writing garbage posts bothered to phrase them nicely like that.

Now any old prompt can produce something that at first glance looks like someone spent time thinking about it, even if it is just slop made to look nice.

This doesn't mean anything AI makes is bad, just that if AI made it look nice, that isn't indicative of care in the underlying content.


I always felt like humans that were good at writing that way were often doing exactly what the LLM is doing. Making it sound good so that the human reader would draw all those same inferences.

You've just had it exposed that it is easy to write very good-sounding slop. I really don't think the LLMs invented that.


Revisionist at best.

Sure, some people could write well but didn't have a clue, but they failed to maintain interest: once you realized the author was no good, you bounced as soon as you saw their styled blog.

Now they don't care as they only want the one view and likely won't even bother with more posts at the same site.


Exposed, and also dominating the majority of text being “written” every day. Would we say they invented the scaling and spread potential of slop?

> I hate it because typically that style of writing was when someone cared about what they were writing.

I don't understand these takes. The opposite is true: humans who are good at writing and care about writing never produced this kind of text.

People who don't care about writing but need to crank out a lot of words would occasionally produce writing like that. Human slop existed before AI, but it was not the thing produced by people who write well and care.


You are effectively claiming that either:

AI created its eloquent style unprompted, or AI stole the unpopular style of eloquent speech from people who didn't know what they were talking about.

Neither of which is true, because you are mistaking shitposts on social media for what everyone is talking about when discussing "AI posts".

I don't terribly care about replies or other short messages in this context. Wasting 30 seconds isn't worth complaining about.

But wasting 15 minutes trying to build up a mental model of a proposed solution only to realize it never existed is another thing entirely.


"Evil" merges are only evil if your tooling skips over merge commits as "unimportant", which is a common tactic to try to prune some of the crazy trees you get when hundreds of people are making merge commits into a repo that then creates its own commits for automation reasons...

That is kind of the point of Pijul: first-class support for "how do I combine these", which git mostly discards as unimportant.

But for a lot of work at scale (of code or people), how the bits were mixed is important.


Don't addresses introduce ambiguous locking order across attempts?

While not obviously problematic, that seems weird enough you would need to validate that it is explicitly safe.


If I need to grab 100 locks and they are all moving around a lot, but I've got the first 10, will the order be the same for someone trying to get the same 100? E.g. maybe someone swaps two that neither of us has grabbed yet.

That makes sense; you could only move locks that are "after" all already-taken locks.

You already have a public IP address; the only difference is whether you have a rotating IP address, which is orthogonal to IPv6.

The only difference is most ISPs rotate IPv4 but not IPv6.

Heck, IPv6 allows more rotation of IPs since it has a larger address space.


IPv6 can "leak" MAC addresses of connected devices "behind the firewall" if you don't have the privacy extensions / random addresses in use.

There are a number of footguns for privacy with IPv6 that you need to know enough to avoid.


Privacy extensions are enabled by default on macOS, Windows, Android, and iOS: https://ipv6.net/guide/mastering-ipv6-a-complete-guide-chapt...

On Linux, I think the defaults are left up to the distros so there is a chance of a privacy footgun there. Hopefully most distros follow the example set by Apple and Microsoft (a sentence I never thought I would write...)
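
On Linux the relevant knob is the use_tempaddr sysctl (a standard kernel setting; whether your distro enables it by default varies), which you can check or set directly:

```shell
# 0 = temporary addresses disabled, 1 = enabled but stable preferred,
# 2 = enabled and preferred for new outbound connections
sysctl net.ipv6.conf.all.use_tempaddr

# Enable and prefer temporary (privacy) addresses:
sysctl -w net.ipv6.conf.all.use_tempaddr=2
sysctl -w net.ipv6.conf.default.use_tempaddr=2
```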


They are now - I'm not sure when they were implemented, but I know Windows at least did some really stupid stuff very early on.


Aren't we talking about now?

No one is saying we should have activated IPv6 in its first iteration.


All desktop/mobile OSes today use "stable privacy addresses" for inbound traffic (only needed if you are hosting something long-term) and "temporary addresses" for outbound traffic and P2P (video/voice calls, multiplayer games...) that change quickly (old ones stay assigned so long-lived connections don't break, but they are not used for new ones).


NAT only matters insofar as you don't technically need a firewall to block incoming traffic: if a packet fails the NAT lookup, you know to drop it.

But from a security standpoint you can do the same connection tracking without NAT and get the same result. That is just, technically, a firewall at that point.


Why should C++ stop improving? Other languages don't need C++ to die to beat it.


Half-serious reason: because with each C++ version, we seem to get less and less what we want and more and more inefficiency. In terms of language design and compiler implementation. Are we even at feature-completeness for C++20 on major compilers yet? (In an actually usable bug-free way, not an on-paper "completion".)


The compiler design is definitely becoming more complicated but the language design has become progressively more efficient and nicer to use. I’ve been using C++20 for a long time in production; it has been problem-free for years at this point. It is not strictly complete, e.g. modules still aren’t usable, but you don’t need to wait for that to use it.

Even C++23 is largely usable at this point, though there are still gaps for some features.


gcc seems to have full C++20, almost everything in C++23, and an implementation of reflection for C++26, which is probably the only thing anyone cares about in 26.

https://en.cppreference.com/w/cpp/compiler_support.html

Funny how gcc seems to be the top dog now, what happened to clang? Thought their codebase was supposed to be easier and more pleasant to work with? Or maybe just more hardcore compiler devs work on gcc?


Feature complete is a pretty hard goal to reach. It sounds like "added all the features" but is closer to "bug-compatible across compilers" (not saying there are bugs, just that recent versions have removed a lot of wiggle room for implementations).

Also, modules were a lot of work and are kind of the reason it took so long. They are wonderful and I want them, but proper implementations (even with many details being implementation-defined) required a lot of work to figure out.

Most of the time all the compilers get ahead of the actual release, but in this case there were so many uncertainties that only rough implementations were available beforehand, and then post-release they had to make user-facing adjustments to how they handled incremental compilation.


Reflection was a desperate need: a useful and difficult-to-design feature.

There are also things like template for or inplace_vector. I think C++26 has useful things; just not all of them are useful to everyone.


Ironically the C++ standards committee doesn't want C to improve anymore and wants people to just use C++. Rules for thee but not for me.


Stabilizing C as the language of the operating system and keeping it simple isn't without benefits.

But I do think the frustration that C++ can no longer be a superset of C is overblown on the C++ side.


If investors expect Microsoft profitability, that means their stock today is worth 1/6th of what it will be in 5 years.

That is a cost of capital estimate of roughly 40%.

Which points to investors not believing the company will be that profitable.

I am not saying investors don't think they will be profitable, just that they certainly don't believe they will be that profitable.


That isn't lowering prices at all, it is raising prices.

