So now instead of annoying users with image or audio challenges, websites can annoy users by running up their electricity bills (CPU work isn't cheap) and/or denying them access if they [selectively] disable JavaScript or block web workers in their browser.
I think it's the lesser evil in terms of privacy and self-hosting if you need anti-spam protection for something. In today's world spam protection is becoming more and more necessary, and even with the JavaScript requirement this is less intrusive than a service from Google or Cloudflare or something.
Suboptimal and an inefficient use of resources, yes, but possibly the only way to combat bots without privacy intrusive services. I'm open to hearing alternative ideas, though!
Bots will trend towards resembling real users exactly.
All you can really do is make it expensive for a bot to spam requests. Everything else will be identical to humans one day, and in the meantime it's annoying to block legit Tor users or legit scraper bots.
This is practically useless, since desktop computers doing some work can easily be eclipsed by specialized hardware run by spammers and Sybil attackers.
It fails to be an automated test to tell computers and humans apart, as computers are more than capable of solving proofs of work without human intervention.
This is true but so are the existing captchas. Existing captchas are just harvesting training data for Google's self driving vehicles at this point.
With a PoW captcha, it doesn't matter how smart you make your algorithm, it's still going to be slow. With existing systems I'd argue it's probably a lot slower for people than for machines, especially since it's people guessing what a machine thinks people would classify an image as.
This is an easy solution for rate limiting low trust/high risk connections and better software isn't going to magically make it any faster. This has always been what captchas aim to accomplish.
Hi, developer here. There is a table showing hashes per second on various devices at the bottom of the README. My laptop (ThinkPad T480s) = 70 h/s; my phone (Motorola G7) = 12 h/s. It's not so bad on the phone. The site owner can tweak the difficulty for whatever lowest common denominator they want.
If it automatically scales based on current traffic, that might not matter.
You can have it turn itself off during a normal "1 request per minute" day on a small blog and then crank up to "a new CPU needs 2 seconds" during a DDoS.
Use a token bucket or leaky bucket or whatever, so a few normal users clicking around for 10 minutes won't trigger it, but after a while the server runs out of patience if they keep making requests faster than that.
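The token-bucket idea above can be sketched like this (all names and the difficulty ramp are invented for illustration): each client gets a bucket of request tokens, no challenge is issued while tokens remain, and once the bucket is overdrawn the PoW difficulty grows with the overdraft.

```javascript
// Sketch of adaptive PoW gating via a token bucket. Names and the
// difficulty formula are invented for illustration, not from any project.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;       // burst allowance in requests
    this.tokens = capacity;
    this.refillPerSec = refillPerSec;
    this.last = Date.now() / 1000;
  }

  // Returns the PoW difficulty (in bits) to demand for this request:
  // 0 means "no challenge", and the cost grows as the client overdraws.
  take(now = Date.now() / 1000) {
    this.tokens = Math.min(
      this.capacity,
      this.tokens + (now - this.last) * this.refillPerSec
    );
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return 0; // within budget: no challenge
    }
    this.tokens -= 1; // allow overdraft, but charge for it
    return Math.min(32, Math.ceil(-this.tokens)); // difficulty scales with overdraft
  }
}
```

Normal users who click around slowly never see a challenge; a flood from one source quickly hits escalating difficulties.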
I’m not sure what the point really is, they pay pennies to people in Bali to sit around and solve these. Anyone who really wants to get in is going to get in. At best it keeps honest people honest.
Check out this guy on YouTube, he can pretty much open any lock in thirty seconds without causing any physical damage, will change your whole perspective on security.
But that's why this proof of work scheme doesn't make sense.
I assume the attacker doesn't need the accounts immediately. I also assume that a real user will wait at most 10 seconds when creating an account on their old, underpowered phone.
So the attacker could either wait about 28 hours (10 × 10,000 seconds) to do the attack, which for most attacks won't matter much, or use some high-powered AWS instance that's 100x as powerful as the phone and wait a few minutes (AWS pricing isn't that bad if you just need 5 minutes of compute time).
Yes, it increases "costs", but not by very much and not in a way that scales.
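Spelling out the arithmetic above (all inputs taken from this comment: 10 s of phone PoW per account, 10,000 accounts, and an assumed rented machine 100x faster than the phone):

```javascript
// Worked version of the attack-cost estimate from the comment above.
const secondsPerAccount = 10;   // worst-case wait a real user tolerates
const accounts = 10000;         // accounts the attacker wants
const speedupFactor = 100;      // assumed rented server vs. old phone

const serialPhoneHours = (secondsPerAccount * accounts) / 3600;            // ~27.8 h
const fastServerMinutes = (secondsPerAccount * accounts) / speedupFactor / 60; // ~16.7 min
```

Either way the total cost stays small, which is the comment's point: the PoW price is paid once per account, so it adds a constant per-unit cost rather than anything that scales against a determined attacker.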
PoW should probably be built into the browser as a standard at some point if it is going to see widespread use. If a website is trying to stop bots, the bots are at an advantage if they can compute the PoW using optimized C while legitimate customers are computing it in JavaScript.
WebAssembly will help with this. If the browser's JIT is good enough, it'll be close to optimized C.
Then you just need to make sure your algorithm is also space-hard and resists parallelization, so GPUs and ASICs can't gain an advantage.
Basically it's a password hash, like Argon2. I think libsodium already has an official WebAssembly build, so there you go.
Web browsers also have "crypto.subtle", but it's not allowed on file:// (making local testing difficult), and I don't know if it has password hashing.
It's an HTML meta tag that contains an address to send/stream money to, similar to an email address but for value rather than text. The website's backend of course receives data about that payment in real time and can change the content of the website based on it.
That last part (about denying access) at least can be fixed. If scripts are disabled (or the needed features are unimplemented in the browser), or if scripts are enabled but an error occurs when activating the web workers or whatever other features it uses, the page can display a link to the documentation and let you enter the response manually, perhaps copied from an external program, possibly running on a different computer, which as native code might even be faster than the web page. (Given some sort of protocol-identification attribute, this substitution could even happen automatically.)
It can still be annoying because of the extra CPU work, though (and may waste energy), and anyone who both disables scripts/workers and won't or can't use the manual fallback will still be denied access.
I wish I could be given the option to just pay instead of solving a captcha. At the end of the day that's what bots end up doing (pay a human to solve the captcha for them), so why not just cut out the middleman, and let me pay the website.
Cryptocurrencies would be a good solution for this: specifically, a layer-2 network on top of a cryptocurrency, like Bitcoin's Lightning Network or payment channels on Ethereum, both of which allow sub-cent transactions with sub-cent fees.
There are obviously UX challenges to making it easy to acquire the crypto, but I could imagine this starting as an optional alternative to captchas.