Hacker News | comandillos's comments

Same, and I also read Netherlands instead of Neanderthals.

Yeah, me too!

https://en.wikipedia.org/wiki/De_Rat%2C_IJlst

>De Rat (English: The Rat) is a smock mill in IJlst, Friesland, Netherlands, which was originally built in the seventeenth century at Zaanstreek, North Holland. In 1828 it was moved to IJlst, where it worked using wind power until 1920 and then by electric motor until 1950. The mill was bought by the town of IJlst in 1956 and restored in the mid-1960s. Further restoration in the mid-1970s returned the mill to full working order. De Rat is working for trade and is used as a training mill. The mill is listed as a Rijksmonument (No. 39880).[1]


Such a pity, since remote dev containers are critical for me. I guess some SSH tunneling could help with it...

Umm… Zed supports remote dev over SSH… what’s your concern?

And Zed even supports Dev Containers

It seems not both at the same time; I just tried to open a dev container over SSH with 1.0 and it didn't work

I don't know why everyone praises GPT 5.4 when Opus 4.5 and onwards are way better for me on complex stuff, i.e. reverse engineering, implementing low-level protocols, interpreting datasheets and specs... I've been using Codex for a while, and although the app itself is great, the model sometimes takes approaches that do not make any sense.

GLM is really good for the size and price. I've been using Big Pickle on OpenCode and it's pretty impressive what it can achieve for being free.


I've been using Qwen3.5-35B-A3B for a bit via OpenCode and oMLX on an M5 Max with 128 GB of RAM, and I have to say it's impressively good for a model of that size. I've seen a huge jump in the quality of the tool calls and how well it handles the agentic workflow.


This is about the newly released Qwen3.6. Just wanted to make sure you read that correctly.


Quite scared by the fact that the original issue pointing out the actual root cause has been 'Closed as not planned' by Anthropic.

https://github.com/anthropics/claude-code/issues/46829


The response doesn't even make sense and appears to be written by AI.

> The March 6 change makes Claude Code cheaper, not more expensive. 1h TTL for every request could cost more, not less

Feels very AI.

> Restore 1h as the default / expose as configurable? 1h everywhere would increase total cost given the request mix, so we're not planning a global toggle.

They won't show a toggle because it will increase costs for some unknown percentage of requests?


Sounds like a decision I would make when memory is expensive and you want to get rid of the very long (in time) tail of waiting 1h to evict cache when a session has stopped.

There must be a better way to do this. The issue for consumers is the pricing difference. If they made cache writes the same price as regular input writes, that would solve the whole problem. If you really want to push it, apply that pricing only to requests where the number of cache hits > 0 (to avoid people setting this flag without intent to use it), and you've solved the whole issue.
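Back-of-the-envelope sketch of that proposal. All per-million-token prices below are made up for illustration (they are not Anthropic's real rates); the point is only the structure: a long-TTL cache write carries a premium today, and the proposal removes that premium for the write while keeping cheap cache reads.

```python
# Hypothetical per-million-token prices -- illustrative only,
# NOT Anthropic's actual pricing.
PRICES = {
    "input": 3.00,           # regular input tokens
    "cache_write_5m": 3.75,  # cache write, short TTL (small premium)
    "cache_write_1h": 6.00,  # cache write, long TTL (bigger premium)
    "cache_read": 0.30,      # cache hit
}

def session_cost(prompt_tokens, turns, ttl="1h", write_price=None):
    """Cost of a session that re-sends the same prompt prefix each turn.

    Turn 1 writes the prefix to cache; turns 2..N read it back.
    `write_price` models the comment's proposal: price cache writes
    like regular input tokens instead of at the TTL premium.
    """
    write = write_price if write_price is not None else PRICES[f"cache_write_{ttl}"]
    first_turn = prompt_tokens / 1e6 * write
    later_turns = (turns - 1) * prompt_tokens / 1e6 * PRICES["cache_read"]
    return first_turn + later_turns

# Today (hypothetical numbers): opting into 1h TTL costs more up front.
baseline = session_cost(200_000, turns=10, ttl="1h")
# Proposal: cache writes billed at the plain input rate.
proposed = session_cost(200_000, turns=10, write_price=PRICES["input"])
```

Under these toy numbers the proposal strictly lowers the session cost, which is why gating it behind "only when cache hits > 0" would be enough to stop abuse without a global toggle.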


Memory is expensive? If reads are as rare as they claim you can just stash the KV-cache on spinning disk.


Aren’t those latency sensitive though?


When a casino is making a lot of money from gamblers, they don't care about their customers losing money, given the machines are rigged against you.

Anthropic sells you 'knowledge' in the form of 'tokens' and you spend money rolling the dice, spinning the roulette wheels and inserting coins for another try. Later they add limits and dumb down the models (their gambling machines), leaving you paying for the wrong answers.

Once you hit your limit or Anthropic changes the usage limits, they don't care and halt your usage for a while.

If you don't like any of that, just save your money and use local LLMs instead.


Why scared? Like, if their software gets bad, we stop using it.


Maybe scared wasn't the best word... but we cannot deny Opus is a great - if not the greatest - model at coding, and Anthropic is the only one serving it at reasonable prices when going through their subscription model.


Sounds like an addiction to me


I mean, this is blatantly false. Codex just rolled out a $100/month plan with higher usage quotas than Claude, and GPT 5.4 is more capable than Opus 4.6. At least for the systems work I do.

And if you can't stomach OpenAI, GLM 5.1 is actually quite competent. About Opus 4.5 / GPT 5.2 quality.


how have you coded before the era of llms?


In my case, the T&Cs on using input/output are so bad with almost all the other providers that I'm forbidden from using them for work (and it doesn't make sense to pay for a separate sub when I basically have two at this point: one direct with Anthropic, one via GitHub Copilot).


This is still far away from being viable for actually useful models, like bigger MoE ones with much larger context windows. I mean, the technology is very promising, just like Cerebras, but we need to see whether they are able to keep this up with the evolution of the models over the next few years. Extremely interesting nevertheless.


Keep in mind, though, that if you can run a model at 100-1000x the speed, then even if the model is less capable, the sheer speed may let you do more interesting things (like deep search explorations with LLM-guided heuristics).
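To make the "LLM-guided heuristics" idea concrete, here's a minimal best-first search sketch where the scoring function stands in for a very fast LLM call ranking how promising each state looks. The `score` interface and the toy numeric problem are hypothetical; the point is that at 100-1000x inference speed, one model call per expanded node becomes affordable.

```python
import heapq

def guided_search(start, neighbors, is_goal, score, budget=1000):
    """Best-first search. `score(state)` plays the role of a cheap,
    fast LLM judging how promising a state is (higher = better);
    here it's just a plain function standing in for that call."""
    frontier = [(-score(start), start)]  # max-heap via negated scores
    seen = {start}
    expanded = 0
    while frontier and expanded < budget:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state, expanded
        expanded += 1
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt))
    return None, expanded

# Toy usage: reach 42 from 0 via +1/+7 moves, with "closeness to 42"
# as the stand-in heuristic the hypothetical LLM would provide.
goal, n_expanded = guided_search(
    0,
    neighbors=lambda s: [s + 1, s + 7],
    is_goal=lambda s: s == 42,
    score=lambda s: -abs(42 - s),
)
```

With a good heuristic the search expands only a handful of nodes; with a weak one it degrades toward breadth-first, which is exactly why raw model speed changes what's feasible.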


Just another piece in this Jenga tower called C++. If you want reflection, maybe just use a language that was designed with reflection support from the beginning.


To me this is just another marketing stunt where the company wants to build a public image so their customers trust them (see Apple), but as always, who knows what will happen behind the scenes. Just look at when most major US companies had backdoors in their systems providing all data to the NSA, i.e. PRISM.


>just another marketing stunt

What evidence on _Amodei_ and his actions leads to that conclusion?


Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir. They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance. They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.

When you really start digging into it, it appears schizophrenic at first, and then you remember market incentives are a thing and everything falls into place.


>Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir.

Palantir will also be subject to the same contractual limitations as the DoD.

>They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance.

The stated red lines are about mass domestic surveillance and fully autonomous lethal weapons - and those are the kinds of restrictions you’d expect to apply to any government using the tech on its own population, not just the US.

For American agencies to use Anthropic's models against other sovereign states requires access to the raw data from that state, which is somewhat of a practical firebreak. Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk it seizing control of the technology for its friends?

> They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.

What is the realistic alternative? Sit quietly and pretend scaling isn't a thing and dual use doesn't exist? Try to pause/stop unilaterally while money floods into their arguably less scrupulous competitors?

Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.


> Palantir will also be subject to the same contractual limitations as the DoD.

Well, first of all, we don't actually know that. Second, I'm going to question the commitment of any company to the principles of democracy and AI safety if one of their biggest partnerships is with a literal mass-surveillance, Minority-Report-crap company. It's the most confusing business partner to see when you're positioning your company as THE ethical one. If you're dealing with Palantir, you're helping mass surveillance, full stop, because that's what this company does. Which country's citizens get the short end of it is completely irrelevant (though in all likelihood it's still Americans, because that's Palantir's home turf).

> Pragmatically, Amodei is an American citizen heading an American company in America; why give the current regime additional reasons to persecute them and risk seizing control of the technology for their friends?

If that's how we characterize the current regime (which I actually agree with), then how come he's proactively trying to help it, deal with it, and insist it's a democracy that needs to be "empowered"? Sounds backwards to me. When you're about to be persecuted by your own government for not allowing it to use your models to do some heinous shit, this sounds like exactly the kind of government you shouldn't be helping at all (and ideally not do business where it can reach you). This is not normal.

> What is the realistic alternative? [...] Try and pause/stop unilaterally while money floods into their arguably less scrupulous competitors?

If you notice that you're doing harm and you're concerned about doing harm, stop doing harm! Don't make it worse! "If I hadn't pulled the trigger, somebody else would" is a phrase you wouldn't expect to hold up in court. Similarly, racing to the bottom to be the most compassionate, self-conscious, and financially successful scumbag is the least convincing motivation imaginable. We will kill you quickly and painlessly unlike those other, less scrupulous guys! Logic like this absolves bad actors from any responsibility. The amount of harm stays the same but some of it gets whitewashed and virtue-signalled, and at the very minimum I'd expect the onlookers like ourselves not to engage in that.

> Nobody knows if Anthropic's efforts will make much difference, but at least it is refreshing to see a technology company and its leader try to stand up for some principles.

These aren't principles. What he's doing here is a free opportunity for incredible PR and industry support that he's successfully taken advantage of. The actual policy backslides, caveats, and all the lines that had been crossed prior will not receive as much press as the heroic grandstanding of a humble Valley nerd against Pentagon warmongers. Nobody will actually take the time to read the statement and realize how the entire text is full of lawyer-approved non-committal phrasing that leaves outs for any number of future revisions without technically contradicting it. I've already pointed some of it out earlier in the thread. The technology for autonomous weapons isn't reliable enough for use, gee, thanks! I feel so much safer now knowing that Dario will have no qualms engaging with it as soon as he deems it reliable enough.


You know, once the lawyers get involved, there are no contradictions because they define every term and then it makes all the sense in the world.

If Humanity = America, then obviously they don't care about the rest of the people, as a very, very silly example.


You call it silly, I call it an accurate reading!


Wild that my Huawei phone running HarmonyOS allows you to customize the search engines in the default browser and iOS does not.


My pixel phone running stock android lets me change the search engine in Chrome[0] as well. It's kind of crazy that Apple still locks this down.

[0] Not on the home screen, but I'll take what I can get.


This isn't entirely true, is it? I mean, the whole internet runs on a PKI, and we need such a mechanism to ensure secure communication across devices in the network. I understand home devices that contain all sorts of sensors and actuators should be handled in a similar fashion, shouldn't they?

I mean, that PKI doesn't exclude non-approved manufacturers from producing Matter devices; you can always trust their PAA (their CA) in your border router if it's not a well-known manufacturer. And I am pretty sure that in this case the Matter border router would warn you and ignore the fact that the PAA is not in the local store of root CAs (as we did back when we had HTTPS without Let's Encrypt and didn't want to pay Comodo to sign our certs).
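A sketch of the trust decision being described, assuming a controller keeps a local store of trusted PAA (root CA) fingerprints and lets the user add extra ones by hand, browser-style. The fingerprint values, names, and warn-but-allow policy here are all hypothetical, not actual Matter controller behavior.

```python
# Hypothetical trust stores; fingerprints are placeholders, not real certs.
WELL_KNOWN_PAAS = {"a1b2...csa-approved-root"}      # shipped with the controller
USER_TRUSTED_PAAS = {"ffee...my-diy-vendor-root"}   # added manually by the user

def admit_device(paa_fingerprint, warn=print):
    """Decide whether to commission a device whose attestation chain
    roots in the PAA with this fingerprint. Warn (rather than silently
    fail) when the PAA is only user-trusted, much like a browser
    prompting on a self-signed HTTPS cert."""
    if paa_fingerprint in WELL_KNOWN_PAAS:
        return True
    if paa_fingerprint in USER_TRUSTED_PAAS:
        warn("PAA not in the well-known set; trusting it because you added it.")
        return True
    warn("Unknown PAA; refusing commissioning.")
    return False
```

The pre-Let's-Encrypt analogy maps directly: `USER_TRUSTED_PAAS` plays the role of manually importing a root cert into your OS trust store.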


You’re partially correct, but you’ve got enough details wrong that you’re misrepresenting reality.

Matter has a public blockchain with certificates added to enforce which products are considered certified. This is called the distributed compliance ledger (DCL). The hardware devices are expected to ship with certificates on them that match the public ones, and it’s generally not possible to change the on-device certs.

This is distinct from “normal” internet PKI certificate authorities, where you can just swap out a few files or grab a new cert from Let’s Encrypt, because this uses a dedicated blockchain with a history of signatures. Depending on how you want to control the device, you’d need to rebuild the whole chain of trust, including e.g. signatures from Google or Apple.

Also, from a practical perspective, I’m not aware of any actual controllers that let you point to different certificate sources. You can create devices with a “test vendor ID” (0xFFFF), and the controllers are supposed to ignore cert checks for those. This has downsides: OTA updates still require signing, you can’t encode proper identifiers in the device so info pages in apps are wrong, etc.

Also, the “border router” isn’t really the point of trust here; it’d be the actual controller device. A border router is just that, an IP router, like a WiFi router or an Ethernet router.

https://webui.dcl.csa-iot.org/
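The test-vendor-ID escape hatch mentioned above can be sketched as a controller-side policy gate. This is a hypothetical simplification (function name and return values are made up), not real controller code; the only grounded facts are the 0xFFFF test vendor ID and the DCL lookup for certified products.

```python
TEST_VENDOR_ID = 0xFFFF  # vendor ID controllers treat as "testing: relax attestation"

def attestation_policy(vendor_id, cert_in_dcl):
    """Rough sketch of the commissioning gate described above:
    a real vendor ID must have a matching entry in the Distributed
    Compliance Ledger (DCL); the test vendor ID is admitted with
    attestation checks relaxed (and the downsides noted above)."""
    if vendor_id == TEST_VENDOR_ID:
        return "allow-unverified"
    return "allow" if cert_in_dcl else "reject"
```

This is why "just trust a different PAA" doesn't work in practice: the gate consults the public ledger, not a locally editable root store.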

