I always felt like humans that were good at writing that way were often doing exactly what the LLM is doing. Making it sound good so that the human reader would draw all those same inferences.
You've just had it exposed that it is easy to write very good-sounding slop. I really don't think the LLMs invented that.
Sure, some people could write well without having a clue, but they failed to hold an audience: once you realized the author had no substance, you bounced the moment you saw another of their styled posts.
Now they don't care, because they only want the single view and likely won't even bother with more posts on the same site.
> I hate it because typically that style of writing was when someone cared about what they were writing.
I don't understand these takes. The opposite is true: humans who are good at writing and care about it never produced this kind of text.
People who don't care about writing, but need to crank out a lot of words, would occasionally produce writing like that. Human slop existed before AI, but it was not the thing produced by people who write well and care.
Either AI created, unprompted, the eloquent style it uses, or AI stole that unpopular eloquent style from people who didn't know what they were talking about.
Neither is true, because you are mistaking shitposts on social media for what everyone means when discussing "AI posts".
I don't terribly care about replies or other short messages in this context. Wasting 30 seconds isn't worth complaining about.
But wasting 15 minutes trying to build up a mental model of a proposed solution only to realize it never existed is another thing entirely.
"Evil" merges are only evil if your tooling skips over merge commits as "unimportant". That is a common tactic for pruning the crazy trees you get when hundreds of people are merging into a repo that also creates its own commits for automation reasons...
If I need to grab 100 locks, and they are all moving around a lot, and I've got the first 10, will the order be the same for someone else trying to grab the same 100? E.g., maybe someone swaps two that neither of us has grabbed yet.
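The classic answer to the ordering question is to make the order a property of the locks themselves (e.g. sort by identity), so every thread acquires any overlapping subset in the same relative order and deadlock becomes impossible. A minimal Python sketch of that idea (the helper names are mine, not from any particular library):

```python
import threading

def acquire_in_order(locks):
    """Acquire every lock in `locks`, always in one global order.

    Sorting by id() gives all threads the same total order, so two
    threads grabbing overlapping sets cannot deadlock: whoever wins
    the first contended lock in that order proceeds first.
    """
    ordered = sorted(locks, key=id)
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(ordered):
    # Release in reverse acquisition order.
    for lock in reversed(ordered):
        lock.release()
```

With this, "will the order be the same for someone else" no longer depends on how the locks are moving around: the order is fixed by the locks, not by the traversal that found them.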
On Linux, I think the defaults are left up to the distros so there is a chance of a privacy footgun there. Hopefully most distros follow the example set by Apple and Microsoft (a sentence I never thought I would write...)
All desktop/mobile OSes today use "stable privacy addresses" for inbound traffic (only relevant if you are hosting something long-term) and "temporary addresses" for outbound traffic and P2P (video/voice calls, multiplayer games...), which rotate quickly (old ones remain assigned so long-lived connections don't break, but they are not used for new ones).
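That lifecycle can be modeled in a few lines: new outbound flows always use the newest temporary address, while older ones stay assigned (deprecated) so existing connections keep working. A toy sketch of the RFC 8981-style policy described above; the class and method names are hypothetical, not a real OS API:

```python
class TempAddrPool:
    """Toy model of temporary-address rotation (RFC 8981 style)."""

    def __init__(self, stable_addr):
        self.stable_addr = stable_addr   # used for inbound / listening
        self.temp_addrs = []             # newest last; older = deprecated

    def rotate(self, new_addr):
        # Old temporary addresses are kept, not removed, so long-lived
        # connections bound to them don't break.
        self.temp_addrs.append(new_addr)

    def source_for_new_outbound(self):
        # Only the newest (preferred) temporary address is used
        # as the source for new outbound connections.
        return self.temp_addrs[-1]

    def is_still_assigned(self, addr):
        return addr == self.stable_addr or addr in self.temp_addrs
```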
NAT only matters insofar as you don't technically need a firewall to block incoming traffic: if a packet fails the NAT lookup, you know to drop it.
But from a security standpoint you can do the same connection tracking without NAT and get the same result. At that point it is technically just a stateful firewall.
Half-serious reason: with each C++ version, we seem to get less and less of what we want and more and more inefficiency, both in language design and in compiler implementation. Are we even at feature-completeness for C++20 on the major compilers yet? (In an actually usable, bug-free way, not an on-paper "completion".)
The compiler design is definitely becoming more complicated but the language design has become progressively more efficient and nicer to use. I’ve been using C++20 for a long time in production; it has been problem-free for years at this point. It is not strictly complete, e.g. modules still aren’t usable, but you don’t need to wait for that to use it.
Even C++23 is largely usable at this point, though there are still gaps for some features.
Funny how gcc seems to be the top dog now, what happened to clang? Thought their codebase was supposed to be easier and more pleasant to work with? Or maybe just more hardcore compiler devs work on gcc?
Feature-complete is a pretty hard goal to reach. It sounds like "added all the features" but is closer to "bug-compatible across compilers" (not saying there are bugs, just that recent standard versions have removed a lot of wiggle room for implementations).
Also, modules were a lot of work, and they are kind of the reason it took so long. They are wonderful and I want them, but proper implementations (even with many details being implementation-defined) required a lot of effort to figure out.
Most of the time all the compilers get ahead of the actual release, but in this case there were so many uncertainties that only rough implementations were available beforehand; post-release, they had to adjust how they handle incremental compilation in user-facing ways.
You are very much saying that OP is an attack post.
Or at least implying that it is tonally dissonant to claim otherwise.
If you didn't believe it was wrong, you would comment on the post; but you are explicitly avoiding doing that.