Hacker News | Maxion's comments

To be fair, 7.9 inches is quite a bit bigger than 7 inches. That's ~27% more screen area.
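Quick sanity check on that figure: for the same aspect ratio, screen area scales with the square of the diagonal.

```python
# Area ratio of a 7.9" vs 7" display, assuming the same aspect ratio.
# Area scales with diagonal squared, so the linear ratio gets squared.
ratio = (7.9 / 7.0) ** 2
print(f"{(ratio - 1) * 100:.0f}% more area")  # prints "27% more area"
```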

Not a car engineer, but starter motors can draw pretty high current, so this could also just be a feature that lets the starter pull as much power as it can while cranking.

> All that is to say, this space is ripe for some open hardware/software love.

There are just so many computers and whatnot in modern cars that this is a very tall ask. You'd need a project on par with Home Assistant to get anywhere.


Yeah, it seems like more modern technology has settled on standard protocols (maybe a naive impression--someone will shout at me if that's the case), but there's probably a very long tail of bizarre false starts if you want full coverage of models back to the early '90s, when computers became commonplace in cars.

After 2006/2007 nearly everyone did CAN; in the US it's even mandatory for OBD-II (ISO 15765-4) since the 2008 model year, though I assume there are details and exceptions. Before then, everyone did their own thing, often with custom chips that haven't been made since 2004 (or even 1999): good luck finding one that works if yours breaks. CAN is cheap and provides a lot of power while hiding most of the protocol complexity. The alternatives before it were often less powerful than CAN while being a lot more complex in practice, because the complexity wasn't hidden.
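For a flavor of what "hiding the complexity" buys you: once everything is standard CAN/OBD-II, decoding a sensor reading is a few lines. A minimal sketch of decoding engine RPM from a mode 01 PID 0x0C response, per the SAE J1979 formula (the example frame bytes are made up for illustration):

```python
def decode_rpm(frame: bytes) -> float:
    """Decode engine RPM from an OBD-II mode 01 PID 0x0C response.

    Single-frame layout: [length, 0x41 (mode 01 reply), 0x0C (PID), A, B].
    RPM = ((A << 8) + B) / 4 per SAE J1979.
    """
    if len(frame) < 5 or frame[1] != 0x41 or frame[2] != 0x0C:
        raise ValueError("not a PID 0x0C response")
    return ((frame[3] << 8) + frame[4]) / 4

# Hypothetical response with A=0x1A, B=0xF8:
print(decode_rpm(bytes([0x04, 0x41, 0x0C, 0x1A, 0xF8])))  # prints 1726.0
```

Pre-CAN, each of those steps (framing, addressing, the PID table itself) could be manufacturer-specific.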

Someone in our org starred our main repo, so for me this isn't true.

A lot of defense spending revolves around overall manufacturing capacity. Deals contain options that won't be exercised unless it's wartime. These options increase the cost of the deal, as the manufacturer needs to maintain the capacity.

AFAIK there's no murdering of citizens going on in any EU member country by that country's own government at the moment.



for example?

The other problem is that this is a bit of a circular path: with deps being so numerous and so poor, upgrading existing old projects becomes a pain. There are A LOT of old projects out there that haven't been updated simply because the burden of doing so is so high.

This sentiment is all well and good, but when you end up in a new-to-you JS codebase with a list of deps longer than a Costco receipt, using some ancient Webpack with its config split into five or so files, then no one is letting you upgrade to Vite unless the site is completely down.

It's almost like Churchill's quip: "He has all the virtues I dislike and none of the vices I admire." In other words, in some ways the JS ecosystem rushes to all the tech-debt-inducing "shiny shiny" and avoids all the tech-debt-reducing hard work of refactoring and wisdom. It's almost as if large chunks of the JS ecosystem thrive on "the dopamine hit". Santayana's wisdom whispers behind every import.

Sad but true...

Did this for a project in 2022. Haven't had any drama related to CVEs, haven't had any issues related to migrating from some version of something to another.

The client has not had to pay a cent for any sort of migration work.


There are certainly security benefits to keeping things in-house: less exposure to supply-chain attacks (e.g. the Shai-Hulud malware) and widespread security bugs (e.g. the React Server Components RCE). Plus it's much easier to do a complete audit and threat model of the application when you built and understand everything soup-to-nuts.

Of course, it also means you have to be cautious about problems that dependencies promise to solve (e.g. XSS), but at the same time, bringing in a bunch of third-party code isn't a substitute for fully understanding your own system.
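XSS is a good example of a problem you inherit when you drop the template library: every user-controlled value has to be escaped before it hits markup. A minimal sketch using only Python's stdlib (the `render_comment` helper and its markup are hypothetical, just to show the principle):

```python
from html import escape

def render_comment(author: str, body: str) -> str:
    # Escape user-controlled values at the point of interpolation.
    # quote=True also escapes " and ', so values stay safe even
    # inside attribute contexts.
    safe_author = escape(author, quote=True)
    safe_body = escape(body, quote=True)
    return f'<p class="comment"><b>{safe_author}</b>: {safe_body}</p>'

print(render_comment("mallory", "<script>alert(1)</script>"))
```

The hard part isn't the escaping call itself; it's guaranteeing it happens on every output path, which is exactly what templating deps automate for you.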


Is the lack of CVE because the implementations you wrote are better written and safer than those in the standard libraries or because no one has checked?

Presumably the latter. However, mindlessly bumping package versions to fix bullshit security vulnerabilities is now industry standard practice. Once your client/company reaches a certain size, you will pretty much have to do it to satisfy the demands of some sort of security/compliance jarl.

And yet npm install [package with 1000 recursive dependencies] is not considered a supply chain risk at all by those security/compliance jarls.

Let alone having to check all licenses...


Well there's probably far less attack surface.

Very laudable, though this is probably also part of the issue: if the client doesn't need any migration work, the dev doesn't get more money. Which might in turn be phrased: "It is difficult to get a man to understand something, when his salary depends upon his not understanding it!" -- by someone other than me (Upton Sinclair, as it happens).

I have worked at an employer where the frontend could easily have been done in a traditional server-side templating language, since most of the pages were static information anyway with very little interactivity. But instead of doing that with one person, producing an easily accessible and standards-conforming frontend, they went with Next.js, which required three people full-time to maintain, including all the usual churn and burn of updating dependencies and changing the "router" and such. Porting a menu from one instance of the frontend to another took 3 weeks. Fixing a menu display bug after I reported it took 2 or 3 months.


> The client has not had to pay a cent for ...

From human society's PoV, you sound like a 10X engineer and wonderful person.

But from the C-suite's PoV... yeah. You might want to keep quiet about this.


It's nice to sidestep the brittleness that web implementations pick up simply from version churn.

If LLMs turn out to be such a force multiplier, the way to fight it is to ensure that there are open source LLMs.

I think the issue is that LLMs are a cash problem as much as they are a technical problem. Consumer hardware architectures are still pretty unfriendly to running genuinely competitive models, so if you want to even do inference on a model that's going to reliably give you decent results, you're basically in enterprise territory. Unless you want to do it really slowly.

The issue that I see is that Nvidia etc. are incentivised to perpetuate that so the open source community gets the table scraps of distills, fine-tunes etc.


You got me thinking that what's going to happen is some GPU maker is going to offer a subsidized GPU (or RAM stick, or ...whatever) if the GPU can do calculations while your computer is idle, not unlike Folding@home. This way, the company can use the distributed fleet of customer computers to do large computations, while the customer gets a reasonably priced GPU again.

The kinds of GPUs in use in enterprise are $30-40k and require a ~10 kW system. The challenge with lower-power cards is that thirty $1k cards are not as powerful, especially since you usually have several of the enterprise cards in a single unit, joined efficiently via a high-bandwidth link. But even if someone else is paying the utility bill, what happens when the person you gave the card to just doesn't run the software? Good luck getting your GPU back.

Consumer hardware is there. Grab a Mac or an AMD Ryzen AI Max+ 395 box, Qwen Coder, and Cline or OpenCode, and you're getting 80% of the real efficiency.

New Strix Halo (395+) user here. It is very liberating to be able to "just" load the larger open-weight MoEs. At this param-count class, bigger is almost always better --- my own vibe check confirms this, but obviously it's not going to be anywhere close to the leading cost-optimized closed-weight models (Flash / Sonnet).

The tradeoff with these unified-LPDDR machines is compute and memory throughput. You'll have to live with a ~50 token/sec rate and compact your context prefix aggressively. That said, I'd take effortless local model capability over outright speed any day.
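That ~50 tok/s number falls out of a back-of-envelope model: decode on these machines is memory-bandwidth-bound, so each generated token has to stream the model's active parameters from RAM once. A sketch with hypothetical numbers (~256 GB/s unified LPDDR5X, an MoE with ~10B active params quantized to 4 bits; both are illustrative, not measurements):

```python
def decode_tokens_per_sec(bandwidth_gb_s: float,
                          active_params_b: float,
                          bits_per_weight: int) -> float:
    """Upper-bound decode rate for a memory-bandwidth-bound model:
    bandwidth divided by the bytes of active weights read per token."""
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return bandwidth_gb_s * 1e9 / bytes_per_token

# Hypothetical: 256 GB/s, 10B active params at 4-bit quantization
print(decode_tokens_per_sec(256, 10, 4))  # prints 51.2
```

Real throughput lands below this bound (KV-cache reads, scheduling overhead), but it explains why MoEs with small active-param counts are the sweet spot for these boxes.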

Hopefully the popularity of these machines prompts future models to offer perfect size fits: an 80 GiB quant for a 128 GiB box, a 480 GiB quant for a 512 GiB box, etc.


The problem is that even if an OSS project had the resources (massive data centers the size of NYC packed with top-end custom GPU kit) to produce the weights, you need enormous VRAM-laden farms of GPUs to do inference on a model like Opus 4.6. Unless the very math of frontier LLMs changes, don't expect frontier-par OSS to be practical.

I feel like you're overstating the resources required by a couple orders of magnitude. You do need a GPU farm to do training, but probably only $100M, maybe $1B of GPUs. And yes, that's a lot of GPUs, but they will fit in a single datacenter, and even in dollar terms, there are many individual buildings in NYC that are cheaper.

I refer you to the data centers under construction, roughly the size of Manhattan, for next-generation model training. Granted, they're also to house inference, but my statement wasn't hyperbole; it's based on actual reality. Accommodating the next generation of frontier training is infeasible for any but the wealthiest organizations on earth. OSS weights are toys. (Mind you, I like toys.)

> you need enormous VRAM laden farms of GPUs to do inference on a model like Opus 4.6.

It's probably a trade secret, but what's the actual per-user resource requirement to run the model?
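The exact figures are indeed secret, but the dominant per-user cost is easy to estimate: beyond the (shared) weights, each concurrent user needs their own KV cache, which grows linearly with context length. A sketch with purely illustrative dimensions (frontier model configs are not public):

```python
def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 context: int, bytes_per_elem: int = 2) -> float:
    """Per-user KV-cache size: 2 tensors (K and V) per layer, each of
    shape [kv_heads, context, head_dim], at bytes_per_elem (fp16 = 2)."""
    total = 2 * layers * kv_heads * head_dim * context * bytes_per_elem
    return total / 2**30

# Hypothetical 80-layer model, 8 GQA KV heads of dim 128, 128k context:
print(round(kv_cache_gib(80, 8, 128, 128_000), 1))  # prints 39.1
```

So even with grouped-query attention, a single long-context user can tie up tens of GiB of VRAM on top of the shared weights, which is why per-user serving costs stay high.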


There's already an ecosystem of essentially undifferentiated infrastructure providers that sell cheap inference of open weights models that have pretty tight margins.

If the open weights models are good, there are people looking to sell commodity access to it, much like a cloud provider selling you compute.


Open-source models will never be _truly_ competitive as long as obtaining quality datasets and training on them remains prohibitively expensive.

Plus, most users don't want to host their own models. Most users don't care that OpenAI, Anthropic and Google have a monopoly on LLMs. ChatGPT is a household name, and most of the big businesses are forcing Copilot and/or Claude onto their employees for "real work."

This is "everyone will have an email server/web server/Diaspora node/lemmy instance/Mastodon server" all over again.


Local models are more like browsers than servers. The user doesn't care where they're hosted, they click an icon and ask questions either way.

People do care about the privacy of these things though. It's one thing to talk about encryption, but users are pouring out their heart and soul to these things, and they're not all idiots.

That would be accepting the framing of your class enemy, there is no reason to do that.

Unless they are also pirate LLMs, I don't see how any open source project could have pockets deep enough for the datacenters needed to seriously contend.

