Hacker News | paradite's comments

FYI: It's not an official Claude / Anthropic website.

Especially concerning since we just had an npm phishing attack and people can't tell the difference.


Because it's North Korea, and cryptocurrency is the best asset they can get for pragmatic reasons.

For anything else you need a fiat market, which is hard to deal with remotely.




It’s like a thread of rabid animals replying. So much unbridled entitlement and frustration without any hope of recourse.

I’d almost say it’s hard to understand how people don’t realize that Grok has all of the same power and incentive structures behind it as Anthropic’s Claude models.


Grok has Musk behind it, and that has ... much worse implications than the background of the other companies. Not that those would be saints, but they are not as open about it as Musk.


I recently did a livestream on trying to understand the attention mechanism (K, Q, V) in LLMs.

I think it went pretty well (was able to understand most of the logic and maths), and I touched on some of these terms.

https://youtube.com/live/vaJ5WRLZ0RE?feature=share
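For reference, the core idea covered in the stream can be sketched in a few lines of NumPy. This is a toy single-head version (no masking, no multi-head projections, no batching), just the scaled dot-product step that relates K, Q, and V:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) matrices. Returns one vector per query."""
    d_k = Q.shape[-1]
    # How similar is each query to each key, scaled to keep softmax stable
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys: each row becomes a probability distribution
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the value vectors
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query position
```

The output shape matches the input: one attended vector per query position, each a convex combination of the rows of V.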


The constant scrolling is very distracting. I couldn't follow along.


Thanks for the feedback!


I've come to a painful conclusion on Next.js:

It's a bad framework for building full stack apps, but it's better than anything else.

https://x.com/paradite_/status/1941016421934551477


Claude Sonnet 4 is the best coding model. Period. Nothing else comes close.

Anthropic probably has 80% of AI coding model market share. That's a trillion dollar market.


>That's a trillion dollar market

Not if you have to constantly spend enormous sums to stay ahead of your competition, or else you lose your edge. It's not the best coding model because they have some mystical treasure in their basement. It's so rapidly becoming a commodity that at some point Microsoft or Google will offer just as good a model for free and, like search, they'll just start milking people with ads.

That's likely one of the reasons for the shifting privacy stances, not just for training but because monetization of the product itself is probably looking pretty dim in the long run.


There is automatic code indexing from Cursor.

Autocomplete is also automatically triggered when you place your cursor inside the code.


Yes, Cursor, “The AI Code Editor.”


Cursor is an AI IDE and not what they are describing.


Claude models have been weaker in vision tasks compared to models from OpenAI and Google.

https://eval.16x.engineer/evals/image-analysis

For them to roll out a browser extension must mean that they have found a workaround or an alternative method to address the vision performance gap.


It won't work because of too many false positives. People are already trained to ignore warnings, like how they blindly accept T&C without reading.


If a giant red warning saying 'THIS APP MAY BE MALWARE' doesn't stop someone, then they've either made an informed choice to proceed or it's willful negligence. In other words, users aren't 'trained' to ignore warnings; they're simply being willfully negligent.


It’s because on the other side of that warning is a cracked version of Spotify that removes the adverts.

The user can’t make an informed choice because it’s literally impossible to audit the safety of the app or the author. So they will click past any warnings and follow any number of steps to install the app that gives them something desirable for free.


So what?

Those same users can now install Facebook, and Facebook does this: https://medium.com/@ak123aryan/facebooks-hidden-android-trac...

And Facebook is, and will remain, verified in the future too.


As someone who is usually careful, I too have found myself clicking past warnings and error notifications in recent times, mainly because I want to do something and the software is actively preventing me from doing it. It isn't negligence; it's just wanting to get something done and not having the time or the nerves to carefully read through and think about messages, dialogs, and screens.

Back in the early days of the Internet there was the Joel Spolsky article on why users will always do anything to see the dancing bunnies.


It doesn’t matter what adjectives you apply to them - they do it and they’ll do it again. Most people are not equipped to evaluate the veracity of that statement, and if a few good apps don’t register with Google (that these will exist is the whole reason this move is problematic at all, right?) and ask you to click through on the website or whatever, they’ll get used to touching the stove and not getting burned.

Cf. the Windows “it could be malware” blurb. You basically can’t use any software from a small publisher without clicking through it, even if they pay for the code signing certificate.


But then you get situations like, "THIS PRODUCT MAY CAUSE CANCER," being cautioned everywhere, with no distinction between, "this is certainly harmful," and "we just haven't verified it isn't harmful".


Have you met a human before? Most will simply click past anything that’s impeding their immediate goal.


The fact that you don't even realise why that wouldn't work is kind of telling.

> users aren't being 'trained' to ignore warnings

Of course they are. Every time they click "continue anyway" and it actually isn't malware (which is 99% of the time) they are being trained that the warning is nonsense.

And they're right! What use is a warning that an app might be malware, if a) it actually isn't almost every time you see the warning, and b) you have no way of telling if it is or isn't anyway?

I hate this move too, and I don't think they should have done it. But "just make the warning even bigger!" is obviously dumb.


There aren't too many false positives, it's just that most modern android software is malware.

Saying "this will steal your data" is probably correct.

So what we're actually asking users to do is install some malware if it's provided by a big enough tech company, but not other malware. Of course users get confused.

Just stop downloading apps altogether and run the web views in the original web view - the web browser.

Will Google, Meta et al. do that and abandon their apps? Of course not, they need to install malware.


The way we allow paternalistic tech companies to train the consumer to abdicate personal responsibility is going to bite us in the ass sooner or later. I'm betting on sooner.


Then make the false positives lower. The problem is they aren't incentivized to improve such features because, where's the money in that?


How about requiring the user to type into a text box "App Foo might be malware. I want to install it anyways."? And disable copy and paste for that box.


Maybe they shouldn't offer an "OK" button that the stupid user can blindly click. They could instead say "this app is dangerous, go to system settings to enable it" and offer only a "Dismiss" button.


I'll point to Windows Vista, which went all in on this kind of security, even giving you a big warning if you tried to change your background. Computer magazines quickly published guides on how to change a slider or registry setting to reduce the number of stupid warnings, and people were quickly trained to ignore these screens and just hit OK.

Anyway, Apple already does this with unknown apps downloaded from the internet, you need to go to security settings and hit a button there.


Lmao, that is literally how it worked.


This is something laughable that Apple does. Any time you install something from GitHub, it makes you click a few extra boxes. And their tightening down of things also ends up making people look for third-party software in the first place. All this really does is, like you said, teach people to ignore warnings.


That's just their first step. They will remove the extra boxes eventually. They already removed option-click as a workaround.


Is it possible to install stuff from GitHub on iOS? I thought it was completely impossible on Apple devices.


I was referring to OS X, but in case you didn't know, there's a current European lawsuit going on about enabling exactly this for iOS.


It is, but you have to reinstall it every week.


> It is, but you have to reinstall it every week.

I'd greatly appreciate it if you can share the relevant link/repo for it?


https://altstore.io is the big one. You might want the AltStore fork SideStore (you can do the weekly reinstall without a computer, https://sidestore.io). Other tools exist, like https://sideloadly.io and https://appdb.to.


You use Sideloadly to install any IPA you want. If you don't have a developer account, it will sign the application using a key valid for seven whole days! (With a developer account it's valid for one year instead; don't forget to pay the €99/year ransom.)



Great way to make users hate their own devices.


There was a workaround using an enterprise certificate, but I believe Apple stopped that for misuse of the enterprise program.


Hey. I like your roast on benchmarks.

I also publish my own evals of new models (using coding tasks that I curated myself, without tools, rated by a human with rubrics). Would love for you to check them out and share your thoughts:

Example recent one on GPT-5:

https://eval.16x.engineer/blog/gpt-5-coding-evaluation-under...

All results:

https://eval.16x.engineer/evals/coding

