Hacker News | ashleyn's comments

PSA: Adblock is non-optional for personal and enterprise security.

My gf told me they blocked all addons at work, including adblock. I told her to recommend to the IT department that adblock be made mandatory on all computers. Ad networks make too much money not to look the other way on malvertising.


It's probably a mix of AI productivity boost and market cycle. There is some substance to AI job loss, but I believe Jevons paradox will eventually catch up to transformer-based LLM capabilities.

I'm the last remaining frontend developer after multiple rounds of layoffs. With Claude Code I'm able to do 2x-3x the work I was able to do before it existed. It's hard for me to rationally argue we need more frontend developers.


> It's hard for me to rationally argue we need more frontend developers.

What about when you need a day off, or when you quit unexpectedly?


It's a fool's errand to ever believe in job security. Even if you're absolutely right about your importance, management can always remain stupid longer than you can remain employed.


It truly feels like a farce. I've survived a few rounds of layoffs and I've seen both shitty engineers and our best engineers let go. I assume our good ones are too expensive? It seems like HR just does some spreadsheet magic to grab people without any regard for performance or title.


I'd probably view LLM advice like the blind spot indicator on my car. Trust when it's lit. Don't trust when it's not lit.


This was a very common thing media companies dealt with and still deal with. There are too many legal risks in even reading the idea. SOP is to send back the envelope sealed and with a canned response explaining that they don't accept pitches from the public.


I can't remember what the topic was, but I remember hearing a story about a company that was soliciting ideas from the public for maybe a joke book or maybe tv show plots. They got into a lot of legal hot water once they found out that the ideas weren't original and people were actually just taking them from other sources.

If anyone else knows what I am talking about, I'd like to know the name of the company.


How do they know what they are not reading if the envelope is still sealed?


They have to open the envelope to see what's inside; they get plenty of mail that isn't ideas, so they have to open everything.

But I assume the people who handle the mail are trained so that if the envelope turns out to contain ideas, they stop reading and return the mail with the canned lawyer response.


I often think about how Ask Jeeves had the last laugh in the age of LLM-powered search.


I feel like "game engine" is a misnomer for what we're actually dealing with here. It's more like an "ECS-based scene rendering engine, which can be used for games or for advanced UI". But that doesn't have a succinct label yet.


I think "game engine" is a pretty succinct label for that. :)


This is the central problem with Citizens United. The Supreme Court tends to be unusually deferential in 1A cases and ruled that infinite money can go into formally unaffiliated PACs. Undoing this would require activist judges or a constitutional amendment.


Activist judges?

The Supreme Court is majority activist judges. Why can't new judges undo the old activist judges' wrongly decided law? Why are the new judges suddenly the activists?


In the case of Citizens United, it's actually a pretty straightforward case. Without a constitutional amendment, it would take a very unorthodox reading of the first amendment.

The "problem" with Citizens United is that it's a very clear case.


Corporations are amoral immortals who cannot be placed behind bars. Therefore they should never be given the rights of human beings.


They don't have the rights of human beings. Humans don't lose their rights because they are in a corporation, that is the outcome of Citizens United.

"A corporation is people" is the singular of "corporations are people". Anyone saying anything different is lying or misinformed.

Think about all the times someone who definitely knew better implied that it meant a corporation is a person and trust them less.


Better question: What if we actually punished perpetrators of threats and doxing with the existing laws we have against terroristic threats? Why do we treat this as some unstoppable force of nature when the vast majority of them come through traceable methods like mail or phone?


Why not both?


I'm guessing this "humanizer" actually does two things:

* grep to remove em dashes and emojis

* re-run through another LLM with a prompt to remove excessive sycophancy and invalid URL citations
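The first of those two steps can be sketched mechanically. This is a hypothetical illustration of what such a pre-pass might look like, not the actual tool's code; the regex ranges and replacement choices are my own assumptions.

```python
import re

# Hypothetical "humanizer" pre-pass: strip em dashes and emoji before
# any LLM rewrite step. The character ranges below are assumptions.
EM_DASH = re.compile(r"\s*\u2014\s*")  # em dash, with any surrounding spaces
EMOJI = re.compile(
    "[\U0001F300-\U0001FAFF\U00002600-\U000027BF\U0001F1E6-\U0001F1FF]"
)

def scrub(text: str) -> str:
    text = EM_DASH.sub(", ", text)   # swap em dashes for a comma pause
    return EMOJI.sub("", text).strip()  # drop emoji outright

print(scrub("It's not just fast\u2014it's blazing \U0001F680"))
# → It's not just fast, it's blazing
```

The second step (re-prompting an LLM to tone down sycophancy) can't be reduced to a regex, which is presumably why these tools chain both.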


You’re absolutely right!

Ha. Every time an AI passionately agrees with me, after I’ve given it criticism, I’m always 10x more skeptical of the quality of the work.


Why? The AI is just regurgitating tokens (including the sycophancy). Don't anthropomorphise it.


Because I was only 55% sure my comment was correct and the AI made it sound like it was the revelation of the century


Because of the way regurgitation works. "You're absolutely right" primes the next tokens to treat whatever preceded that as gospel truth, leaving no room for critical approaches.


For student assignment cheating, only really the em dashes would still be in the output. But there are specific words and turns of phrase, specific constructions (e.g., "it's not just x, but y"), and commonly used word choices. Really it's just a prim and proper corporate press-release voice, and that is not a usual university student's writing voice. I'm actually quite sure that you'd be able to easily pick out a first-pass AI-generated student assignment with em dashes removed from a set of legitimate assignments, especially if you are a native English speaker. You may not be able to systematically explain it, but your native-speaker intuition can do it surprisingly well.

What AI detectors have largely done is try to formalize that intuition. They do work pretty well on simple adversaries (so basically, the most lazy student), but a more sophisticated user will do first, second, third passes to change the voice.


No. No one is looking for em-dashes, except for some bozos on the internet. The "default voice" of all mainstream LLMs can be easily detected by looking at the statistical distribution of word / token sequences. AI detector tools work and have very low false negative rates. They have some small percentage of false positives because a small percentage of humans pick up the same writing habits, but that's not relevant here.

The "humanizer" filters will typically just use an LLM prompted to rewrite the text in another voice (which can be as simple as "you're a person in <profession X> from <region Y> who prefers to write tersely"), or specifically flag the problematic word sequences and ask an LLM to rephrase.

They most certainly don't improve the "correctness" and don't verify references, though.
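The "statistical distribution" idea can be shown in miniature. This is a toy sketch of the general approach, not any real detector: it scores a text's word bigrams under two tiny reference samples (one "LLM-voiced", one "human-voiced") and picks whichever fits better. Real detectors work at the token level with far larger models; the training strings here are invented for illustration.

```python
import math
from collections import Counter

def train(sample: str) -> tuple[Counter, int]:
    """Build a bigram frequency table from a reference sample."""
    words = sample.lower().split()
    counts = Counter(zip(words, words[1:]))
    return counts, sum(counts.values())

def bigram_logprob(text: str, counts: Counter, total: int) -> float:
    """Average log-probability of the text's bigrams, add-one smoothed."""
    words = text.lower().split()
    score = 0.0
    for pair in zip(words, words[1:]):
        score += math.log((counts[pair] + 1) / (total + 1))
    return score / max(len(words) - 1, 1)

# Invented reference samples, standing in for large corpora.
llm_counts, llm_total = train(
    "delve into the rich tapestry of ideas and delve into the landscape"
)
human_counts, human_total = train(
    "i dunno it kinda just works when you poke at it"
)

def looks_llm(text: str) -> bool:
    return bigram_logprob(text, llm_counts, llm_total) > \
           bigram_logprob(text, human_counts, human_total)
```

With realistic corpora the same likelihood-ratio shape is what gives low false negatives: the "default voice" concentrates probability mass on sequences human writers rarely produce.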


Providers are also adding hidden characters and attempting to watermark outputs, if memory serves.


It's more complex than that. It's called SynthID-Text, and it biases the probabilities of token generation in a way that can be recovered down the line.
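A minimal sketch of the family of schemes this belongs to (hash-based "green list" watermarking, not SynthID-Text's actual algorithm): each previous token seeds a hash that marks part of the vocabulary "green", generation nudges probabilities toward green tokens, and a detector recovers the signal by counting how often green tokens appear.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Hash the (context, candidate) pair; roughly half the vocabulary
    # lands in the "green" partition for any given context.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens that are green given their predecessor."""
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

def looks_watermarked(tokens: list[str], threshold: float = 0.7) -> bool:
    # Unwatermarked text hovers near 0.5; generation biased toward
    # green tokens pushes the fraction high enough to flag.
    return green_fraction(tokens) >= threshold
```

The recoverability the parent mentions comes from the same hash being computable at detection time, with no access to the generating model needed.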

